Caution On Bias Arguments

“You say it’s important to overcome biases. So isn’t it hypocritical that you’re not trying to overcome whichever bias prevents you from realizing you’re wrong and I’m right?”
— everybody

Correcting for bias is important. Learning about specific biases, like confirmation bias or hindsight bias, can be helpful. But bias arguments – “People probably only believe X because of their bias, so we should ignore people who say X” – tend to be unproductive and even toxic. Why?

1. Everyone Is Biased All The Time

You could accuse me of having a conservative bias. After all, I’m a well-off straight white man, a demographic well-known to lean conservative. If a liberal wanted to discount everything I say, or assume any conservative arguments I make come from self-serving motives, they’ve got all the ammunition they need.

Or you could accuse me of having a liberal bias. After all, I’m a college-educated atheist Jewish psychiatrist in the San Francisco Bay Area. All of those demographics are well-known to lean liberal. If a conservative wanted to discount everything I say, or assume any liberal arguments I make come from self-serving motives, they’re not short on ammunition either.

This is a general phenomenon: for any issue, you can think of biases that could land people on one side or the other. People might be biased toward supporting moon colonization because of decades of sci-fi movies pushing space colonization as the wave of the future, or because Americans remember the moon landing as a great patriotic victory, or because big defense companies like Boeing will lobby for a project that would win them new contracts. Or people might be biased against moon colonization because of hidebound Luddite-ism, or an innate hominid preference for lush green forests and grasslands, or a pessimistic near-termism that rejects projects with payoffs more than a few years out. I personally might be biased towards moon colonization because I’ve been infected with the general Silicon Valley technophile mindset; or I personally might be biased against it because I’m a Democrat and Trump’s been the loudest modern proponent of more moon missions.

This is even easier if you’re allowed to invent biases on the spot. For example, I said people are against moon colonization because of “hidebound Luddite-ism” – is that actually a thing? If I say that regulatory action against tech companies is driven by anti-tech populism, have I identified a bias, made up a bias, or just tautologically rebranded people wanting regulation of tech companies as a force biasing people towards regulation of tech companies? Won’t people who support regulation counter by saying that opponents are just knee-jerk technophiles who have drunk some sort of Silicon Valley hype Kool-Aid?

2. Everyone Is Hypersensitive To Biases Against Their Side, And Thinks Biases In Favor Of Their Side Are Irrelevant

This is called the hostile media effect, though it’s broader than just the media. I’ve talked about it before in Against Bravery Debates. My favorite example is conservatives complaining that the media condemns far-right terrorism but excuses Islamic terrorism (eg 1, 2, 3, 4, 5) alongside liberals complaining that the media condemns Islamic terrorism but excuses far-right terrorism (eg 1, 2, 3, 4, 5).

Or if you prefer facts to anecdotes: according to a Gallup poll, conservatives are more likely to believe the news has a liberal bias; liberals are more likely to believe the news has a conservative bias. In a study where experimenters showed partisans a trying-to-be-neutral video on the Israel-Palestine conflict, the pro-Israel people said the video was biased toward Palestine, and the pro-Palestine people said the video was biased towards Israel.

This ties into the problem where you can just make up a bias, like “hidebound Luddite-ism”. Technophiles will see an anti-tech bias everywhere. And whenever they meet a specific anti-tech person, they can assume that person’s positions have been shaped not by reason, but by the anti-tech sentiments that are omnipresent in our society. Having explained away their opponents’ position as the product of bias, they’ll feel no need to debate it or question whether it might be true.

Anyone can come up with any bias for any position, but this meta-bias is going to affect people’s sense of which biases matter and which ones don’t. Pro-moon-colonizers are going to doubt that technophilia is really a problem motivating people’s reasoning, but think that hidebound Luddite-ism is a big problem motivating everyone on the other side.

3. It’s Hard To Even Figure Out What Bias Means Or When It Is Bad

Suppose A and B are debating some issue, and B is part of a group especially closely linked to the issue. For example:

1. A plumber and a teacher are debating a proposed pay cut for teachers.
2. A man and a woman are debating abortion.
3. An atheist and a Jew are debating the peace process in Israel.
4. A white person and a black person are debating slavery reparations.
5. A citizen and an undocumented immigrant are debating immigration policy.
6. King Edward and a Jew are debating whether to expel all the Jews from England.
7. You and a KKK Grand Wizard are debating whether the KKK should be banned as a hate group.
8. A scientist and a tobacco company executive are debating whether cigarettes are dangerous.

Who is more biased? A or B?

This is a tough question. If we’re just working off the dictionary definition of bias, it ought to be B. But in cases like 6, it would be pretty bad to adjust away from B’s opinion, or discount B as too biased to give a good argument.

We can’t dismiss this as “A is also affected by the issue”. It’s true that for example the plumber may lose a little money if he has to pay higher taxes to fund increased teacher salaries. But since there are fewer teachers than taxpayers, each taxpayer’s loss is much smaller than each teacher’s gain. It still seems like B should be more biased.
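
To make that arithmetic concrete, here is a quick back-of-the-envelope sketch; the numbers are invented purely for illustration and don’t describe any real proposal:

    # Concentrated vs. diffuse stakes, with made-up numbers.
    raise_per_teacher = 5_000        # hypothetical annual raise, in dollars
    num_teachers = 100_000
    num_taxpayers = 10_000_000

    total_cost = raise_per_teacher * num_teachers    # $500 million total
    cost_per_taxpayer = total_cost / num_taxpayers   # $50 per taxpayer per year

    print(f"Each teacher gains ${raise_per_teacher:,} per year")
    print(f"Each taxpayer pays about ${cost_per_taxpayer:,.0f} per year")

On those invented numbers, each teacher’s stake is about a hundred times each taxpayer’s, which is the sense in which B still seems like the more biased party.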

We could model this as two opposite considerations. A is less biased. But B may be better informed. Sometimes this is literal information: I’d expect an immigrant to know more about immigration policy than an average citizen. Other times it can be emotional “information” about how something feels; for example, a woman may have hard-to-communicate information about what makes abortion rights feel important to her.

Is it meaningful to say the Jew has hard-to-communicate information about how much he doesn’t want to be kicked out of England? Or should we just say that, as the person most affected by the policy, he’s more likely to be thinking about it clearly? But now we’ve come full circle to saying that motivated reasoning itself is good!

I have a hard time squaring this circle. The lesson I take is that it’s easy to switch between “we should trust the more affected party less” and “we should trust the more affected party more” without clear principles to guide us.

Probably most people will do this in a biased way. When their side is the more affected party, they’ll say that gives them special insight and so other people should back off. When they’re the less affected party, they’ll say that makes them unbiased and other people are just motivated reasoners. This is yet another reason to expect that bias arguments have so many degrees of freedom that everyone will figure their opponents are biased and they aren’t.

4. Bias Arguments Have Nowhere To Go

Most people are already aware of their potential biases. No straight man will be surprised to be told that he is a straight man, or that this might bias him. “You are a straight man, so consider that you might be biased” doesn’t give new information. It just deflects the conversation from potentially productive object-level discussion to a level which is likely to sound patronizing and overly personal, and which has less chance of being productive.

Someone asks me “Are you sure you don’t just hold that opinion because of the liberal Jewish milieu you grew up in?” I look deep into my brain, the opinion still sounds right, I don’t see a sticker on the opinion saying “Proud product of the liberal Jewish milieu you grew up in”, and…then what? Do I drop the opinion even though it still seems right? Do I keep holding the opinion, but feel guilty about it? Do I retort back “Aha, no, you only hold your opinion because of the conservative Gentile milieu you grew up in, so you should drop your opinion!”?

There’s a sense in which we should always be considering the Outside View (see part III here) for each of the opinions we hold. That is, on the Inside View, the opinion might still seem convincing, but on the Outside View, we might have enough circumstantial evidence that it was produced by some process uncorrelated with truth that we doubt it despite its convincingness. But just learning that there’s some possible bias should rarely have much of an effect on this process, especially since with any self-awareness we should probably have already priced all of our own biases in.

5. Where To Go From Here

I think low-effort (and even medium-effort) arguments from bias will usually be counterproductive. Second-person bias arguments (“You are probably biased on this topic because X”) and third-person bias arguments (“Society is probably biased on this topic because X”) are at least as likely to perpetuate biases as to help overcome them, and less useful than just focusing on the object-level argument.

What’s left? Bias is an important obstacle to truth-seeking; do we just ignore it? I think bias arguments can be useful in a few cases.

First, it’s fair to point out a bias if this gives someone surprising new information. For example, if I say “The study proving Panexa works was done by the company producing Panexa”, that might surprise the other person in a way that “You are a straight man” wouldn’t. It carries factual information in a way that “You’re a product of a society laden with anti-tech populism” doesn’t.

Second, it’s fair to point out a bias if you can quantify it. For example, if 90% of social scientists are registered Democrats, that gets beyond the whole “I can name one bias predisposing scientists to be more liberal, you can name one bias predisposing scientists to be more conservative” arms race. Or if you did some kind of study, and X% of social scientists said something like “I feel uncomfortable expressing conservative views in my institution”, I think that’s fair to mention.

Third, it’s fair to point out a bias if there’s some unbiased alternative. If you argue I should stop trusting economists because “they’re naturally all biased towards capitalism”, I don’t know what to tell you, but if you argue I should stop trusting studies done by pharmaceutical companies, in favor of studies done by non-pharma-linked research labs, that’s a nice actionable suggestion. Sometimes this requires some kind of position on the A vs. B questions mentioned above: is a non-Jew a less biased source for Israel opinions than a Jew? Tough question.

Fourth, none of this should apply in private conversations between two people who trust each other. If a well-intentioned smart friend who understands all the points above brings up a possible bias of mine in a spirit of mutual truth-seeking, I’ll take it seriously. I don’t think this contradicts the general argument, or is any different from other domains. I don’t want random members of the public shaming me for my degenerate lifestyle, but if a close friend thinks I’m harming myself then I want them to let me know. I’m realizing as I’m writing this that this paragraph deserves its own essay, and that it would probably be a better and more important essay than this one is.

Most important, I think first-person bias arguments are valuable. You should always be attentive to your own biases. First, because it’s easier for you; a rando on Twitter may not know how my whiteness or my Jewishness affects my thought processes, but I might have some idea. Second, because you’re more likely to be honest: you’re less likely to invent random biases to accuse yourself of, and more likely to focus on things that really worry you. Third, you have an option besides just shrugging or counterarguing. You can approach your potential biases in a spirit of curiosity and try to explore them. I think I’m probably biased against communism because many of the communists I’ve met have been nasty people who tried to hurt me, so I try to solve that by reading more communist books and seeking out good communist arguments wherever I can find them. Second- and third-person bias arguments risk leaving you with only the awkward option of changing your opinions to something you don’t really believe in order to deflect someone’s bias accusations. First-person bias arguments should lead to a gradual process of trying to look for more information to counter whatever motivated reasoning you might have.


155 Responses to Caution On Bias Arguments

  1. encharitimone says:

    “Bias is an important obstacle to truth-seeking”

    I would suggest that it’s important to note the primary mechanism by which it is an obstacle: as an impediment to our information stream, either in the form of not *seeing* information supporting some alternative view (e.g. exclusively watching Fox News, or Slate), or in the form of *devaluing* a stream based on a preexisting assessment of its source.

    That suggests a way to make second-person bias arguments more constructive: they should address a specific (perceived) fact, and suggest a specific source of information offering a different perspective.

    For example, if I say, “I support Israel because their military and police actions have been proportional to their provocations and the need for public safety”, and someone replies “you only believe that because you are a white Christian”, I’m probably going to ignore them. If they say “as a white Christian you might be missing some critical information” and point me towards reports from a humanitarian organization, that’s going to feel worth listening to.

    Of course, this only works when the discussion is premised on specific arguments/facts (e.g. “…because their military and police actions have been proportional to their provocations and the need for public safety”). But if that type of content is missing from the discussion, there are bigger issues at play than bias.

    • len says:

      It’s not just different information streams. Bias often implies different values. Consciously or not, a pro-Palestinian person may believe that Israelis are the out-group, and care less about their welfare than about Palestinians’.

      The sort of person who believes economists are biased towards capitalism probably also believes that this is because economists care about economic progress more than they care about poor people.

      Similarly, someone who is biased because they’re part of a group close to the issue (a teacher talking about teacher salaries) will tend to overvalue their own welfare. It’s certainly not because of a lack of information or because of biased information. They simply care more about their own welfare than that of taxpayers in general.

    • IvanFyodorovich says:

      Certainly information streams are part of it, but some is also that we do not have perfectly consistent moral views or evidentiary standards. Politician X is accused of non-ideological bad thing Y. If you like politician X, you will try to find evidence that X didn’t do Y, or that the evidence falls short of absolutely perfect, or that Y is really not much worse than Z which a politician on the other side did and . . .

      In a way, the non-ideological scandals are like a calibration test for your own bias. How serious is it if a person claimed to be Native American on a law school application on the basis of family lore? If a president owns a hotel where a foreign diplomat stays, is that a violation of the emoluments clause and what is the proper remedy? If you always find yourself saying that the scandal on your team is nothing and the scandal on the other side is worse than Watergate, you probably shouldn’t trust your ability to assess more ideologically charged information either. (Attribution: got this argument from Jonathan Chait. Now do you like it more or less : )

  2. Jiro says:

    I think the citizen versus the undocumented immigrant doesn’t fit. (It also shouldn’t be using the question-begging euphemism “undocumented”, but that’s a separate point). In most arguments, it is assumed that by default, you should weigh the preferences of both sides equally. Bias is something that leads you to not do this. But in this particular argument, the citizen has as a premise that the nation exists for the sake of its citizens and that the two sides’ preferences should not be weighed equally. The default doesn’t exist as a default, and deviating from it is, unlike in most arguments, legitimate and not “bias”.

    The scientist versus the tobacco company executive also doesn’t fit. Whether cigarettes are dangerous, as that question is normally understood, is a factual question. Much of your analysis doesn’t apply to factual questions–in deciding what to do to Jews it’s relevant that Jews are harmed, but in deciding a factual claim about X it is irrelevant that concluding “yes” for the factual claim would harm someone.

    • Purplehermann says:

      Huh. I read all of the cases as factual disagreements, not whose preferences matter more. Which side has stronger motivation to get a specific answer only matters in the sense of assuming they are more educated about the subject and/or are messing up in their arguments.

      • niohiki says:

        Well, a lot of them are clearly moral, no? There is no fact to be debated about whether we should expel some ethnic group or raise taxes, we either do it or we don’t. That is not the same as asking about facts such as whether the ethnic group correlates with crime or controls the economy, or whether taxes create unemployment or improve living conditions.

        Still, I cannot agree with @Jiro either, when saying

        but in deciding a factual claim about X it is irrelevant that concluding “yes” for the factual claim would harm someone

        In an empty, thought-experiment world, yes, sure. But of course people want to “prove” facts in order to support their preferences, and since this is a post about how real world people should not abuse the “bias argument” in their already highly toxic real world arguments… I think that tobacco and immigration are, in fact, pretty appropriate for a post written in 2019.

        • Purplehermann says:

          You are right that in the expulsion of the Jews example one angle could be to communicate just how bad expulsion would be for them. The angle of argument that I assumed Scott was referencing is the pros and cons of Jewish expulsion for the country.
          I assumed this because 1. That’s the type of argument that makes sense for the discussion at hand.
          2. I’m skeptical of the usefulness of pleading not to kick out the Jews, but think pointing out benefits can work

          • niohiki says:

            Yes, I see what you mean, and there will be for sure a part of that in such a discussion. The thing about 2 is that the benefits for one side may be “because not kicking them out is the humanitarian thing to do and that makes us feel good”, which is strictly a fact, but not really a subject of debate on the factual level.

            I actually think that the ambiguity on factual vs moral in the questions is central to the whole issue, since it relies on the misuse of facts to justify personal preferences.

    • Simon_Jester says:

      I think the citizen versus the undocumented immigrant doesn’t fit. (It also shouldn’t be using the question-begging euphemism “undocumented”, but that’s a separate point). In most arguments, it is assumed that by default, you should weigh the preferences of both sides equally. Bias is something that leads you to not do this. But in this particular argument, the citizen has as a premise that the nation exists for the sake of its citizens and that the two sides’ preferences should not be weighed equally. The default doesn’t exist as a default, and deviating from it is, unlike in most arguments, legitimate and not “bias”.

      I don’t think that’s a very good definition of bias.

      Many forms of bias don’t involve failure to equally weight preferences. They can involve deliberate blindness to factual information (“I refuse to believe that Policy X increases the crime rate”).

      Or discounting factual information that one would not ordinarily discount (“I support Policy X despite the fact that it saves us A dollars and increases the crime rate by C, but I oppose Policy Y despite the fact that it also saves us A dollars and increases the crime rate by C.”)*

      Or in extreme circumstances, crafting entirely different sets of rules to describe the group against which we are biased (” A man who does X consistently is ‘forceful,’ while a woman who does the same thing equally consistently is ‘a ball-buster.’ “).

      The bare fact that I believe that Group Z should not have rights doesn’t mean I can’t reasonably be accused of bias against Group Z that distorts my perspective.
      _____________

      *(Obviously real world cases are more nuanced than this).

    • Emby says:

      Do you notice that “King Edward vs Jew” is functionally identical to “citizen vs undocumented immigrant”?

      In both cases, on the left we have “person who, by virtue of their indelible group membership, is allowed to (help to) decide government policy”. On the right we have “person who, by virtue of their indelible group membership, is considered not a proper member of their society”.

      The citizen can decide that the priority is to serve the interests of citizens. King Edward can decide that the priority is to serve the interests of Christians. It’s the same situation in different clothes.

  3. Clutzy says:

    Bias talk only interests me if there is a fun game being played:

    Debate a person where sourcing is mandatory, and you can only use sources that are controlled by your political opposition.

    • niohiki says:

      I have actually debated using those rules. It quickly becomes boring (although it is not a bad exercise in rhetoric). People show biases because selecting your sources is really effective. So it degenerates into avoiding concrete use of your given sources as much as possible and waiting for the other side to make a mistake you can capitalize on, instead of twisting your opposition’s sources to your advantage as one would have hoped for. I know it’s anecdotal evidence, but I think it matches common-sense expectations.

    • broblawsky says:

      That seems like a good way to dismiss all of your opponent’s sources as being irredeemably biased, regardless of whether they actually are.

      • Clutzy says:

        No, it’s just an exercise that helps illustrate that “bias” is more often a result of ignoring stories than of reporting falsely on stories.

        I’ve done NYT vs. WSJ and CNN vs. Fox sourcing debates and they are incredibly useful.

        Once you get to something like Daily Caller vs. Huffpo it becomes really hard.

  4. Reminds me of the rationalist version of this: “Everybody else has biases, I have priors.”

    But the first-person vs third-person approach seems like a sensible heuristic. It’s limited, though, by a lack of deeper awareness of how biases actually operate at the cognitive level. We’re too easily seduced by the list of biases with cute names. But most of these biases are just ex-post unhelpful versions of cognitive frames and schemas. And without cognitive frames, you don’t have cognition. So, any conversation about bias is always going to be a utilitarian discussion about the helpfulness of certain mental schemas. And these schemas will always have a cultural and personal component, and they will change over the course of history as well as a single conversation.

    Maybe a better way to think about it is identifying our broader frames and schemas and investigating what input they provide into our reasoning and emotional reactions. But ultimately, because epistemology is ethics, we will have to make personally political decisions about what to do.

    Quick reading list:
    Lakoff: Women, Fire and Dangerous Things and/or Moral Politics – on what categories and frames look like
    Fauconnier and Turner: The Way We Think – on how frames and schemas are integrated during cognitive interactions
    Rorty: Contingency, Irony and Solidarity – on the ethical dimensions of epistemology

    • AC Harper says:

      A suggestion for additional reading (or even a Slate Star Codex review) is ‘The Mind is Flat’ by Nick Chater. He proposes that we improvise a lot when speaking to other people because we rarely marshal more than the gist of any idea.

      My gist: We all suffer from the grand delusion that what we think is real is actually really real. But we all have to find ways of navigating the really real through improvisation. Over time our improvisations build the strengths of our discriminatory neural networks into beliefs, but these are only triggered by bottom up perceptions and are not properly available to introspection. So (perceived) biases are the results of many prior contexts.

  5. Brandon Berg says:

    Totally beside the point, but the philosophy of Luddites is just called Luddism.

  6. Clegg says:

    Bulverism is arguing why someone is wrong, rather than that they are wrong. This seems to be what “Bias Arguments Have Nowhere To Go” is getting at.

  7. OxytocinLove says:

    I don’t think it’s true that most people are already aware of their potential biases. I’ve been on both sides of being surprised by a “Do you think that viewpoint is influenced by your race/gender/class/background?”

    This mostly works when the person is making what they think is a general statement about most human experience that turns out to be specific to their group (something like typical mind fallacy, but instead it’s typical experience fallacy). To give the least CW-y toy example I can think of…

    A: Why are you worried about how to get laid? All you gotta do is walk up and be like “Nice shoes, wanna fuck?” and they just throw themselves at you.
    B: Have you considered that that only works for you because you’re young, conventionally attractive, and wealthy?
    A: Shit, yeah, you’re right.

    • notpeerreviewed says:

      I was recently debating how useful the concept of “privilege” is, and I found it came down to conflicting assumptions about how often the kind of exchange you just described actually works out that way. If it’s really common for people to overlook things in that way, and all they need in order to realize the mistake is for that to be pointed out, then privilege is an extremely useful concept; otherwise less so.

      • Eponymous says:

        Incidentally, I think that the concept of “privilege” mostly functions as a device to shift the focus from the disadvantages that accrue to individuals because of their membership in certain groups, to the advantages that accrue to others. Of course these are logically equivalent — if you believe that black people face discrimination, then logically it follows that white people possess certain privileges, at a minimum not facing this same discrimination. But the shift in emphasis matters rhetorically and in how we think about things.

        I actually think this shift (in both directions) can be useful to change perspective on an issue, though usage has given the term a whole slew of connotations, to the degree that I wonder if another term for the same concept might be useful.

        • albatross11 says:

          There’s a really useful insight in there–the fact that your observed world and mine may be pretty different because the world presents itself differently to different people. Men hardly ever get catcalled, for example, so you might go through life assuming that people never get catcalled in the places where you hang out, even if that happens pretty often.

          But the way privilege is used in arguments is like 95% destructive.

        • notpeerreviewed says:

          Relevant enough in this thread that I feel comfortable plugging my own review of “The Perils of Privilege”, by Phoebe Maltz Bovy:

          https://notpeerreviewed.wordpress.com/2019/06/25/privilege-is-real-and-we-need-to-stop-talking-about-it-so-much/

        • Aapje says:

          @Eponymous

          It also seems to better allow framing differences that have both upsides and downsides as oppression, by just focusing on the downsides for one group and the upsides for another group.

    • J Mann says:

      IMHO, the argument more often grinds into a debate about both sides’ experience and their understanding of the other side’s experience. To stick with your example, but move it towards some real disputes I’ve seen:

      Incel: Nobody loves me and nobody ever will. I’m doomed to die alone because I don’t have the qualities people are looking for in a romantic partner.

      Former Incel: Don’t give up! I used to feel that way, but I made an effort to get out there and examine my preconceptions, and it was hard work and some luck, but now I’m in a happy relationship!

      I: You’re biased because you’re better looking and have more spoons than I do. You just thought you were an incel and you don’t understand how hard it is for me.

      FI: Well you’re biased because you’re in the middle of depressive spiral. And you don’t understand how hard it was for me.

      At which point, both sides give up on communication.

      • Jon Gunnarsson says:

        What’s this about spoons? Apparently I haven’t been keeping up with my incel terminology.

        • Corey says:

          I believe it’s a Tumblr thing, at any rate it’s not limited to incels.
          The spoon is a unit of motivation to overcome obstacles, roughly. Sort of like the ego-depletion theory of willpower.
          So if you’re out of spoons, you won’t do unpleasant things, such as risking rejection by asking someone out. (Or cleaning up, etc.)
          Low spoons lead to akrasia, that is.

        • J Mann says:

          Sorry, spoons aren’t an incel thing, they’re a tumblr disability thing.

          For more info, see this Ozy post, but it’s shorthand for units of emotional resources needed to respond to challenges, plan a way out, etc.

        • Jaskologist says:

          It’s Millennials using the concept of “limited budget” but having to reinvent it from scratch because their parents apparently never taught it to them.

        • J Mann says:

          @Jaskologist – I took it more as a term for discussing units of grit, like “utils” for measuring utility.

          You’re right that the underlying point is that there’s a limited budget of grit, and that some people have more challenges they need to spend their grit on, or have to spend more grit to get the same result.

          (And they may have less grit too, but that’s not the point most “spoons” users want to make.)

        • Jaskologist says:

          The origin was this essay from the early days of the web, and it was specifically about living with debilitating illnesses, and how they require you to budget your energy expenditures because you have less to spend than healthy people. I hope it hasn’t drifted to the point that it just refers to matters of motivation and akrasia.

          To be clear, I do think it’s an important concept to have a name for, and I guess if they’d just said “energy budget” it wouldn’t have caught on; I still feel like they went with a spoon illustration because they just didn’t have the concept of “not enough money.”

          To be clearer, I enjoy ragging on Millennials as much as the next guy, but the blame for this lies squarely on their parents.

          • Simon_Jester says:

            @Jaskologist

            The key differences include the following: Money is finely divisible. Money can easily be borrowed or saved up. And you can usually find SOME way to make incrementally more money by working harder.

            ‘Spoons’ are not necessarily finely divisible (depending on how the metaphor is interpreted). They are definitely not something you can borrow or save up very effectively. And, importantly, willpower budgets are quite resistant to efforts to “increase the willpower pool.” There are vast swarming legions of people in the developed world who find themselves limited by their willpower reserves, and who would dearly love to increase said reserves by any means available, but appear to know none.

            Money is at the heart of a huge web of complicated realities that makes it harder to use as a metaphor for the concept ‘spoons’ is used to discuss. ‘Spoons’ themselves work as a metaphor because they strip away a lot of misleading questions that obscure the issue.

          • albatross11 says:

            There are also widely deployed commercial strategies for making even looking at some information, or considering some set of options, such a spoon-sink that people avoid doing it. See cellphone billing plans for an easy example. (Do I have the time, attention-span, and desire to burn a couple hours carefully selecting the best options here, or do I just want to leave what I have in place, knowing that it works but I’m probably paying an extra 10-20%?).

      • notpeerreviewed says:

        Now I’m not sure if you’ve seen my online arguments with incels or I’m just a cliche.

        • J Mann says:

          I spent a couple days reading /r/incel and /r/incels back when everybody was talking about the movement, so it’s possible that I read your discussions.

          (But yes, they do fit a pattern – trying to break someone out of a hopelessness spiral is hard work but IMHO worthwhile, so good on you).

    • DinoNerd says:

      *sigh* My employer, like many large companies, has mandatory “compliance” training, with training required every year. I’ve always thought the purpose was simply to avoid being sued – “see, we told them not to bribe government officials; it’s not the company’s fault”. But now I’m failing to get around to taking my mandatory training on unconscious biases. (I figure I’ll wait till I’m already in a horrifically bad mood, before taking it. Because I’d bet real money that the class is oversimplified and inaccurate, and I hate parroting untruths as part of my job, even when they are merely oversimplifications.)

  8. Andrej Bjelaković says:

    “…so I try to solve that by reading more communist books and seeking out good communist arguments wherever I can find them.”

    With regard to that, in case you haven’t come across it:
    https://jacobinmag.com/2019/01/karl-marx-engels-capitalism-political-economy?fbclid=IwAR2j4Ox4K0z4SV_5z5wuX1YID4ZHhumYsll4WGZ71RfgB8PAfVeKPMvcI9E

  9. deciusbrutus says:

    Bias Bias Fallacy Fallacy.

    • deciusbrutus says:

      Or, in the full explanation: That people reliably identify bias among their ingroup as much more allowable than bias among their outgroup is not a reason why the beliefs that they have are wrong.

  10. fion says:

    Possible typo: “a pessimistic near-termism that rejects with payoffs more than a few years out” missing a word?

    • JulieK says:

      I think the problem is that near-termism isn’t a standard word. Maybe “near-term perspective,” or if you must, “near-term-ism?”

      (For more hyphen geekery, see this discussion in The King’s English.)

      • fion says:

        I should have been clearer. That part wasn’t what I was commenting on. I was referring to “that rejects with payoffs”, which I think should read something like “that rejects options with payoffs” or “that rejects projects with payoffs” or something along those lines.

  11. Erusian says:

    I like CS Lewis’ idea, which is that before you attempt to investigate bias you must first resolve who is correct. As accusations of bias are almost universally ad hominem attacks, they should not be part of a good argument anyway. For example, let’s say a plumber and a schoolteacher are debating raising tax rates by 10% and giving the money to schoolteacher salaries because the schools horribly underperform. The schoolteacher claims the increased salary will improve grades. The plumber claims it will not. One of these is true and one of these is false. Before we can decide whether the schoolteacher is using motivated reasoning, we must determine that they are wrong independently of the fact that they are a schoolteacher.

    I will quote CS Lewis here:

    You must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly.

    In the course of the last fifteen years I have found this vice so common that I have had to invent a name for it. I call it “Bulverism”. Some day I am going to write the biography of its imaginary inventor, Ezekiel Bulver, whose destiny was determined at the age of five when he heard his mother say to his father — who had been maintaining that two sides of a triangle were together greater than a third — “Oh you say that because you are a man.” “At that moment”, E. Bulver assures us, “there flashed across my opening mind the great truth that refutation is no necessary part of argument. Assume that your opponent is wrong, and explain his error, and the world will be at your feet. Attempt to prove that he is wrong or (worse still) try to find out whether he is wrong or right, and the national dynamism of our age will thrust you to the wall.” That is how Bulver became one of the makers of the Twentieth Century.

    […]

    Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is “wishful thinking.” You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself. When you have checked my figures, then, and then only, will you know whether I have that balance or not. If you find my arithmetic correct, then no amount of vapouring about my psychological condition can be anything but a waste of time. If you find my arithmetic wrong, then it may be relevant to explain psychologically how I came to be so bad at my arithmetic, and the doctrine of the concealed wish will become relevant — but only after you have yourself done the sum and discovered me to be wrong on purely arithmetical grounds. It is the same with all thinking and all systems of thought. If you try to find out which are tainted by speculating about the wishes of the thinkers, you are merely making a fool of yourself. You must first find out on purely logical grounds which of them do, in fact, break down as arguments. Afterwards, if you like, go on and discover the psychological causes of the error.

    • Nick says:

      The Bulverism article is apt, yeah.

      On a related note, I feel like today’s article, like yesterday’s, is a long exploration of what’s wrong with a long ago identified fallacy that ought to be well known among this community, given our roots. It’s kind of frustrating that Scott has to write them at all. Like, I can’t wait for tomorrow’s “Why descending into name-calling is wrong” or next week’s “why you shouldn’t necessarily affirm the consequent,” you know?

      • Corey says:

        “There are no evil mutants” could use a periodic refresher around here.

        • Simon_Jester says:

          That one’s very useful but keeps running into problems: namely, that it would be far more accurate to say “there is no large demographic of the population that consists of evil mutants.”

          There are always going to be individuals whose behavior is so shockingly abnormal that intuition says “that’s an evil mutant” and even science and logic find themselves scratching their heads and saying “well, we know how this person got to be the way they are, but they’re still so different from regular people in the same situation that that just means they’re an evil mutant with a well defined origin story.”

          And once you see individuals acting like evil mutants, it’s *really easy* to generalize that larger demographics are full of evil mutants. Very easy slippery slope to slide down.

    • broblawsky says:

      What about arguments that are impossible to definitively resolve? Especially those that are irresolvable due to one side relying on sources that are fundamentally dishonest? It’s totally fair to ask someone if they’re giving greater weight to arguments because they come from their in-group. That’s the heart of all reasonable bias arguments.

    • Sniffnoy says:

      Yes, exactly this.

    • Garrett says:

      One thing which makes this difficult to do is that many such discussions involve changes which are hard to reverse, and predictions about the future.

    • Gerry Quinn says:

      Interesting that the Ur-Bulverism (as observed by the young Bulver) is an accusation of ‘mansplaining’.

  12. Radu Floricica says:

    Fourth, none of this should apply in private conversations between two people who trust each other. If a well-intentioned smart friend who understands all the points above brings up a possible bias of mine in a spirit of mutual truth-seeking, I’ll take it seriously. I don’t think this contradicts the general argument, or is any different from other domains. I don’t want random members of the public shaming me for my degenerate lifestyle, but if a close friend thinks I’m harming myself then I want them to let me know. I’m realizing as I’m writing this that this paragraph deserves its own essay, and that it would probably be a better and more important essay than this one is.

    First I realized constructive criticism is a good thing, and trained myself to take it well. Then I realized it’s a very rare thing, and that most people will avoid telling you even obvious things. And there are good reasons for this inhibition – the expected utility of offering criticism is well below zero. So the next step was to tell my friends I want their comments to be as open as possible – which of course is almost useless by itself. So I kept repeating that, and made a point to react as positively as possible to any criticism. At the very least a loud “thank you”, and ideally being extra nice for a while. It takes a bit of willpower btw; it’s definitely not coming naturally.

    • Joseph Greenwood says:

      This is a good approach towards friends. Does it work with strangers, especially strangers on the internet? In my experience, they are still happy to criticize, but rarely make an effort to keep those criticisms constructive.

      • Radu Floricica says:

        As a self-discipline thing, yes, very much. To learn to seek to be proven wrong, because it’s a step forward. But few places manage to do this well. Many aren’t even really arguing with you, they seem to be arguing just to get cheers from their own side.

  13. Watchman says:

    Whilst in general it makes sense to assess the relevance of biases, there is though a risk in trying to assess biases on grounds of a label. The example you give of an immigrant being familiar with the immigration system shows this clearly: what if the immigrant hired someone else to arrange their immigration (or an employer did it for them etc)? What if the immigrant spent six months in a camp unsure of what was going on until they were let out? What if the system is so Kafkaesque that local knowledge is required to understand it? By considering the label immigrant as the basis for assessing a person’s biases, we are likely to impose our own biases about the system and how people interact with it, whereas a rational approach would be to enquire into the immigrant’s experience and knowledge first. Likewise, why should we assume a Jew has knowledge of or even interest in Israel? That’s a bias (interestingly one of the biases seemingly underlying the British left’s current antisemitism issues appears to be exactly this) that I imagine could be extremely frustrating for Jews without an interest in Israel. Knowing only that the person you are talking to is a Jew or an immigrant might predispose you to believe they will have certain biases and knowledge, but without having this demonstrated through your interaction, this is your own bias. I’d suggest a fallacy of lived experience exists here, whereby we assume that because a label can be applied to someone we can assume a set of experiences based upon that label, and this assumption preconditions our reception of what that person says.

    Note existing preconceptions about biases need not always be an issue. King Edward III and the Grand Wizard of the KKK are both known individuals/office holders about whom it is possible to have a good idea of their biases through their known actions or just the role they hold. Neither are likely to be strong supporters of universal suffrage for example. But there’s a major difference in the information you can get from an individual name or a post holder in a focussed organisation and the information you get from a label such as Jew or immigrant. In practical terms you should assume no special expertise or bias on behalf of the immigrant about immigration processes without ascertaining these exist first, which is exactly what you should do with any other individual, as otherwise you approach the conversation blinded by your own biases about the person.

    Ultimately, if I tell you you’re going to discuss immigration with an immigrant, you probably will assume their position and experience on this (I know I would). If you then meet the immigrant and they’re wearing the uniform of the Grand Wizard of the KKK that might cause issues of cognitive dissonance. If he then throws back his hood to show he is in fact Edward III (look, we’ve established he’s a wizard so this apparent immortality is totally explicable…) who actually immigrated in 1825, then all your preconceptions are shattered, and you are likely at a disadvantage in your conversation with a violent monarchist with presumably racist views. The risk of letting your biases suggest what an individual will think or know about an issue on the basis of a label that only reflects one aspect of an individual is that you think you’re dealing with a cipher, not a real person.

  14. evocomp says:

    While I usually find bias arguments to be counter-productive I think monetary incentives are super-relevant. The question “How much of your income depends on you believing X?” can be a useful way of quantifying bias.

    It’s good to listen to both the Uber investor and the local taxi driver about whether Uber should be allowed in your city, as they each likely have privileged information or unique direct experiences to offer. But I’m a lot more likely to trust the average commuter because both techie and taxi driver are 100% financially dependent on that answer. Tobacco companies on anti smoking laws, TurboTax on tax legislation, Disney on copyright, anyone with millions to billions riding on a single issue is suspect.

    But while monetary incentives are useful in deciding who to trust, it’s still better to debate the arguments instead of just accusing bias. Biased people can be right on occasion, but a wrong argument is always going to be wrong.

  15. VirgilKurkjian says:

    Hm, what’s the counterspell?

  16. e.samedi says:

    I assume “bias” is meant here in the colloquial sense rather than “cognitive bias”. The examples cited are well-trodden territory in logic, in which the fundamental principle is that you consider arguments not people. Hence, there are the well-known fallacies “ad hominem” and the “genetic fallacy”. To broaden this principle, good arguments and evidence are what matter, not who is offering them.

    Bias is still very important, but in my view, primarily for self-understanding. Recognizing one’s actual motivations, rather than one’s rationalizations, is a key element of wisdom. To that end a good rule of thumb might be to grant everyone the benefit of the doubt except yourself.

  17. jasmith79 says:

    I think the core problem here is this: it’s very tempting to try to win arguments.

    If you successfully shift the argument from the object-level to questioning motive then you’ve put your opponent on the defensive. And if your goal is to win/shame, maybe that’s not such a problem. But I think most people engage in these conversations (or would claim that they do) for the purpose of persuasion.

    I mean, sure, a lot of times this sort of motive-questioning is just a naked attempt to shut down/shout down an unliked viewpoint, but I think this post is suggesting that this is a more pervasive problem, and that has certainly been my experience (mea culpa).

    People aiming to persuade who end up trying to win have taken their eye off the ball.

  18. Iain says:

    There are two important pieces here that you don’t quite attach together.

    1. Everyone is naturally inclined to be biased in favour of their own side.
    2. First-person bias arguments are valuable.

    Overcoming bias is predominantly a first-person activity. The corollary to these statements is that you should be especially suspicious of arguments that flatter your side, or that accuse people of bias against you, and consciously give the benefit of the doubt to claims of bias against people unlike you.

    This is, in a nutshell, the straight white cis male tech-worker’s guide to social justice.

    • lvlln says:

      Shouldn’t this be everyone’s guide to social justice? I don’t see why this guide applies particularly to straight white cis male tech-workers or any particular combination of those descriptors.

      • TakatoGuil says:

        Not fully. As a gay man, there’s no reason for me to be extra suspicious of an argument that accuses someone of being biased against me. I should do due diligence, because otherwise I’m easily manipulated, but I’m sad to say that experience suggests that most people accused of hating me do. A straight person hearing an argument that someone else hates straight people should be suspicious because even within the set of non-straight people, such beliefs are rare.

        I imagine that folks who don’t fit into the “straight white cis male” bucket could make at least somewhat similar arguments, though the degree and questions they focus on might be different than mine.

        Can’t say I know especially what “tech worker” has to do with it, unless Iain is implying that tech workers are especially likely to fall for arguments of flattery or hatred.

        • As a gay man, there’s no reason for me to be extra suspicious of an argument that accuses someone of being biased against me.

          As a gay man, you have an incentive to believe that almost anything that goes wrong in your life is due to bias against you rather than your own faults. And similarly for any characteristic which might lead people to be biased against one.

          The belief that what happened was due to bias might, of course, be true. But there is unlikely to be anything much you can do about other people’s biases against you, whereas there may well be things you can do about your own faults. So you should cultivate a bias towards interpreting things going wrong as due to your own faults whenever that explanation is at least moderately plausible.

          • dick says:

            Agree with HBC, that was off base. Being the target of bias doesn’t necessarily make you more prone to blame your problems on bias, any more than being short makes you more prone to blame your problems on being short.

          • Tibor says:

            @HeelBearCub: Strange. To me that statement sounds almost obviously true and yet you seem to find it laughable.

            Of course everyone has an incentive to believe that anything wrong in their life is not their own fault but other people’s fault. That’s the easiest way to deal with one’s worries, regrets, etc., which is why it is so very popular. If you convince yourself your problems are someone else’s fault and beyond your power, there’s nothing you can do about it (except for being bitter and blaming those other people), so there’s no need for you to make any effort to improve your situation. Of course, as David mentions, sometimes this might be a correct belief, but one has an incentive to hold that belief regardless of whether it is true or not.

            I know quite a few people personally who use this kind of motivated reasoning as an excuse not to try and solve their problems. It is such a common behaviour that I find it hard to believe you don’t know such people yourself.

            Also, being rude in abbreviations is still being rude.

          • HeelBearCub says:

            I know quite a few people personally who use this kind of motivated reasoning as an excuse not to try and solve their problems.

            Is it because they are gay? Some form of minority?

            Or are they simply finding something to rationalize around? And is it because they are simply human?

            In other words, if TakatoGuil is doing this, which we have zero evidence of, it has nothing to do with him being gay.

            Friedman was being rude. I just called it out.

          • Aapje says:

            @HeelBearCub

            Any (observable) property that is linked to being unreasonably disliked can be pointed to, to argue that being disliked is because others are being unreasonable.

            The stronger the link is believed to be, the more eager people seem to be to blame the property, rightly or wrongly.

            Or are they simply finding something to rationalize around?

            Rationalizations are not random. They have to fit with personal and/or cultural beliefs.

            The weaker the belief that there is a link between a trait and being treated unreasonably, the weaker my claim that I was being treated unreasonably due to that trait, and the more likely it is that others will push back against that claim or that my own mind will push back against it.

          • HeelBearCub says:

            Interesting. Apparently my original comment is “awaiting moderation”.

            I guess I have an upcoming ban to which I can look forward.

            (Side note: Does anyone else find the lengths one has to go to to avoid a hanging preposition irritating?)

          • any more than being short makes you more prone to blame your problems on being short.

            Being short obviously makes one more prone to blame any problems that could be due to being short on the fact that one is short. Tall people with problems don’t blame them on their being short.

            The obvious example would be a short man who had trouble getting women to go out with him. It could be due to women preferring tall men, and that’s a pleasanter thing to believe than that it’s because he has low social skills or bad breath.

          • LadyJane says:

            Is it because they are gay? Some form of minority?

            Or are they simply finding something to rationalize around? And is it because they are simply human?

            In other words, if TakatoGuil is doing this, which we have zero evidence of, it has nothing to with him being gay.

            Okay? No one claimed that gay people are more likely to blame their problems on external factors, so you’re basically preaching to the choir here.

            The discussion was about whether it makes sense for a gay person to assume their problems are a result of discrimination. The matter of homosexuality itself is entirely incidental and has nothing to do with the arguments here: you could easily swap out being gay for being tall, short, skinny, fat, White, Black, Christian, Muslim, Jewish, atheist, or any other trait that people could conceivably face discrimination for. But you’re acting as if one side was arguing something like “gay people are just more sensitive than straight people,” even though no one so much as hinted at that notion.

            I agree with TakatoGuil: Homophobia is a real thing and it makes sense not to immediately discount the possibility that someone might be biased against a queer person for their sexual orientation. David Friedman and Tibor seemed to be attacking a strawman of his position, since he never said that he blames all or most of his problems on discrimination, just that he wouldn’t rule the possibility out if there was reason to believe that discrimination might be a factor. But you’re attacking an even more flimsy strawman of their position by acting as though they were making specific claims about the behavior of gay people in general.

          • HeelBearCub says:

            @Lady Jane:
            This was the sentence I objected to:

            As a gay man, you have an incentive to believe that almost anything that goes wrong in your life is due to bias against you rather than your own faults.

            If you don’t think that sentence is making a statement about some quality of gay men, I don’t know what to think.

            Consider some other examples of the construction.
            “As an astronaut, you are likely to have experienced weightlessness”

            “As someone about to be tried for a crime, you have an incentive to retain legal counsel”

            One confusion may be that you aren’t seeing my original post replying to Friedman because it is apparently being hidden.

          • The line I responded to was:

            As a gay man, there’s *no* reason for me to be extra suspicious of an argument that accuses someone of being biased against me.

            (emphasis mine)

            HBC wrote:
            (quoting me)

            As a gay man, you have an incentive to believe that almost anything that goes wrong in your life is due to bias against you rather than your own faults.

            And responded

            If you don’t think that sentence is making a statement about some quality of gay men, I don’t know what to think.

            Correct, you don’t.

            The quality of gay men it is making a statement about is a quality shared with pretty nearly everyone else, as should be obvious given the context of the discussion–the tendency to explain away unfavorable responses as due to someone else’s bias.

            The line I responded to implied that, in his special case, that wasn’t a problem. I was pointing out that, in his case as in others, it was.

            David Friedman and Tibor seemed to be attacking a strawman of his position, since he never said that he blames all or most of his problems on discrimination

            Nothing I said implied that he did–speaking of attacking a straw man. Only that he had an incentive to believe that things that go wrong were due to prejudice against him–not that that incentive led to his routinely doing so. I have lots of incentives to attribute things due to my own faults to external factors—hopefully I resist most of them.

          • LadyJane says:

            If you don’t think that sentence is making a statement about some quality of gay men, I don’t know what to think.

            I don’t see how anyone can interpret that as a categorical statement about gay men without aggressively ignoring the context of both the previous post and the rest of that post. It’s very clear Friedman is saying “some people are likely to blame their problems on external factors, and for the subset of those people who happen to be gay, anti-gay discrimination is likely to be one of the external factors they blame.” He literally says “And similarly for any characteristic which might lead people to be biased against one.” after the sentence you quoted, and later gives an example of a different characteristic (being short). I honestly don’t see how you can interpret it as “gay men are more likely to blame their problems on external factors because that’s an inherent quality of gay men.” That’s a hilariously bad faith assumption, to the point where it seems like you’re actively going out of your way to come up with the worst possible interpretation of the statement.

            I think Friedman’s argument does come across as minimizing the severity and/or prevalence of homophobia in our society, whether or not he intended it that way. But your critique makes it out to be a blatantly and overtly homophobic statement in its own right, which is ridiculous.

        • lvlln says:

          Not fully. As a gay man, there’s no reason for me to be extra suspicious of an argument that accuses someone of being biased against me. I should do due diligence, because otherwise I’m easily manipulated, but I’m sad to say that experience suggests that most people accused of hating me do. A straight person hearing an argument that someone else hates straight people should be suspicious because even within the set of non-straight people, such beliefs are rare.

          But the points that Iain pointed out still stand in this context:

          1. Everyone is naturally inclined to be biased in favour of their own side.
          2. First-person bias arguments are valuable.

          Specifically, bullet 1 means that you, like everyone else, are naturally inclined to be biased in favor of your own side, and that includes your own perception of your experience suggesting that most people accused of hating you do.

          Obviously due diligence is important; sometimes people really are biased against you and deserve the accusation because of it. It’s certainly plausible that, after doing our due diligence, we’d conclude that any particular accusation of someone being biased against you is true. But being gay doesn’t make one immune from – or even particularly less susceptible to – the biases pointed out above that everyone has, and so you ought to be especially suspicious – not dismissive, but suspicious – of any accusations of others being biased against you.

          There’s the flipside as well, of course; being gay doesn’t make one immune from biases, so a gay person ought to consciously give the benefit of the doubt to claims of bias against people unlike them (i.e. straight/bisexual/asexual, etc.).

          • TakatoGuil says:

            The points Iain said do still stand, but his synthesis of them is a simplified model that doesn’t fully hold for minority groups. Hence, “not fully”. A straight person being told that someone was “heterophobic” should be especially suspicious of that claim; even the most liberal surveys of sexuality find that 90% of the population (and I assume it’s more) is straight in the first place and it’s unlikely any of them would be so self-loathing without a severe and unusual pathology. We don’t even have to start considering what percentage of homosexuals hate you guys (and I’m pretty sure it’s low) before the claim “X hates straight people” is extraordinary.

            On the other hand, the Harris Poll found a couple years ago that when asked questions like “How would you feel seeing a gay coworker’s wedding photo?” or “How would you feel learning your child’s school teacher was gay?”, at least a quarter of Americans answer that they’d feel “somewhat” or “very uncomfortable”. I feel like it’s pretty fair to say that such feelings are “bias against me”, and since I am American and rarely interact with foreigners, it’s practical to discount what other places are up to in terms of local acceptance (this helps your argument more than mine; the really scary people with bias against me are mostly in Africa or the Middle East).

            So, statistics say that for the random American, there is a 25% chance that they’re biased against me. Straight white folks don’t have to worry about that shit; there’s enough minorities for 25% of the country to hate white people, but not enough of them do. Dudes don’t have to worry about that shit; the stereotypical misandrist does not comprise 50% of the female gender. I do.

            If someone said to me, “TakatoGuil, that generic American over there is a homophobe,” the odds of that statement being true are the odds of flipping heads on two coins. That is not a statement to be especially suspicious of. Reasonably, sure; if I went around screaming that everyone was a homophobe I’d be wrong a lot. But especial suspicion should be reserved for extraordinary claims. Homophobes are twice as common as black people! If I was especially suspicious of all the bias against me I’d never get anything done.
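
            (To spell out the coin arithmetic: two fair coins both coming up heads happens with probability 0.5 × 0.5 = 0.25, i.e. the same 25% figure from the Harris Poll numbers above.)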

            The nutshell guide to social justice is a laudable ideal, but it doesn’t measure up to reality. Hopefully the country will be more tolerant in the future and homophobes will become uncommon enough that especial suspicion is merited. That’s not how it is right now.

          • lvlln says:

            @TakatoGuil

            But what you’ve done with your post is made an argument “that flatters your side, or that accuse people of bias against you” which, regardless of the details and truth of the argument, you should be especially suspicious of, because of that bullet point 1: “Everyone is naturally inclined to be biased in favour of their own side.”

            Look, if you want to make the argument that a gay person believing that someone else is being biased against them is more likely to be correct than a straight person believing the same thing, I think you can and have made a very strong argument proving that case. But regardless of how strong or convincing or even unquestionably correct that argument is, that doesn’t free you from your natural inclination to be biased in favor of your own side. Thus it still follows that you should be especially suspicious of such an argument, and that if you accuse someone of being biased against you, you should be especially suspicious that you might be falling prey to your own biases.

            There’s no particular reason that a claim has to be extraordinary for one to be especially suspicious of it. The only reason to be especially suspicious of a claim is if it’s a claim that is in favor of your own side. Merely because you are biased in your own favor, like everyone else, you are likely to subconsciously allow more leeway to claims that favor yourself. Again, that doesn’t mean dismissing it outright, but applying extra scrutiny, as a way to combat your own biases. And sometimes, your biases lead you to the correct answer! But it’s only through extra suspicion that you can actually land on that conclusion.

          • Simon_Jester says:

            @lvlln

            But regardless of how strong or convincing or even unquestionably correct that argument is, that doesn’t free you from your natural inclination to be biased in favor of your own side. Thus it still follows that you should be especially suspicious of such an argument, and that if you accuse someone of being biased against you, you should be especially suspicious that you might be falling prey to your own biases.

            There’s no particular reason that a claim has to be extraordinary for one to be especially suspicious of it. The only reason to be especially suspicious of a claim is if it’s a claim that is in favor of your own side.

            The bolded passages are subject to a pretty harsh cost-benefit analysis.

            Basically, the cost of a false positive (assuming someone acted negatively out of bias) can be much lower than the cost of a false negative (assuming someone is reasonable and free of discriminatory bias, when in truth they are not).

            Homophobes sometimes go wildly against their own enlightened self-interest to do things to a gay person, purely because that person is gay. The cost of trusting a homophobe (say, by revealing that one is gay) can be disproportionately high: one can become the victim of crimes, or lose valuable opportunities, that way. Therefore, if you’re gay, deliberately weakening your homophobia detector and asking “but am I sure this guy is acting out of homophobia” can be a very poor choice indeed.

            “Rational action should lead to winning,” in other words. The rational course of action for a gay person faced with potential homophobes is to adopt precisely the level of homophobo-phobia that provides an optimum balance of “able to trust allies” and “avoids being harmed by enemies.” If that means some extra false positives and incorrectly identifying that jerk who tries to hurt you as a homophobe when in fact he’s just a jerk… Well, that’s far from the worst problem to have.

            It can be counterproductive to tell people they’re worrying too much, when the defensive responses built into their worrying are an important part of preserving their survival.
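
            To put rough numbers on that asymmetry, here is a toy expected-cost sketch (every number, and the expected_cost helper itself, is invented purely for illustration, not an estimate of anything real):

            ```python
            # Toy model: a "bias detector" either flags a stranger as hostile or not.
            # Missing a genuinely hostile person (false negative) is assumed to cost far
            # more than wrongly writing off someone who was fine (false positive).
            p_hostile = 0.25        # assumed chance a given stranger is biased against you
            cost_miss = 100         # harm from trusting someone who turns out to be hostile
            cost_false_alarm = 5    # harm from wrongly treating a fine person as hostile

            def expected_cost(p_flag_if_hostile, p_flag_if_fine):
                """Expected cost of a detector with the given hit rate and false-alarm rate."""
                misses = p_hostile * (1 - p_flag_if_hostile) * cost_miss
                false_alarms = (1 - p_hostile) * p_flag_if_fine * cost_false_alarm
                return misses + false_alarms

            print(expected_cost(0.6, 0.1))   # lenient detector: 10.375
            print(expected_cost(0.9, 0.4))   # jumpy detector: 4.0
            ```

            With costs that lopsided, the jumpier detector comes out well ahead even though it raises four times as many false alarms, which is the point about rational action leading to winning.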

        • Cliff says:

          Do you realize this is counter to the argument you are ostensibly supporting?

      • broblawsky says:

        I’d argue that “straight white cis male tech-workers” are, on average, more likely to be flattered by material from the broader culture (as opposed to the content of one’s echo chamber), and ergo, are more in need of this kind of advice.

        • Cliff says:

          Why would you argue that? Because they take constant flak from MSM? Aren’t they way more likely to be favorably inclined towards SJW than average? They may be more favorably inclined than minorities since white leftists are far more SJW than minorities IIRC

        • lvlln says:

          I’d be curious to read that argument.

          But also, the “ergo” seems misplaced. Even if one were more likely to be flattered by material from the broader culture, it’s not as if people on average just consume “broader culture.” There are echo chambers as you point out, but also just lots of other types of niche cultures, which often make up the vast majority of one’s exposure to other people and media and such. So what sort of flattery one gets from the “broader culture” doesn’t really tell us much about the flattery they get overall. It’s entirely possible that one is flattered by “broader culture” but also so beaten down by the niche cultures that dominate one’s life that one needs this advice less than someone who is less flattered by “broader culture” but highly flattered by the niche cultures that dominate that person’s life.

          • broblawsky says:

            I believe that people should consume media from the broader culture, though. Echo chambers are no place for smart people to live. As for ‘niche cultures’, I don’t think there’s a good way to distinguish those from echo chambers. People naturally tend to gravitate to sources of media that stroke their ego and make them feel better about the decisions they’ve already made.

          • albatross11 says:

            +1

            It’s worthwhile to find high-quality thinkers to read/understand, and high-quality news sources to read/understand, who don’t share a lot of your beliefs. Not only does this help you avoid an echo chamber, it also makes you a lot harder to spin. The tropes that are used for signaling to Wall Street Journal readers that X is the bad guy and Y is the good guy aren’t usually the same as the ones used in NPR’s reporting. Seeing the same “bad guys” painted with the bad guy tropes from two different media sources can be pretty enlightening.

        • Aapje says:

          @broblawsky

          Yet white liberals are the only group with a negative ingroup bias and straight white cis male tech-workers seem to be overwhelmingly liberal.

          • LadyJane says:

            If you believe that any group has a “negative ingroup bias,” then it’s overwhelmingly likely that you’re strongly mistaken about that group’s beliefs and motivations.

          • Aapje says:

            @LadyJane

            Fair enough, I should have been more specific: they have negative bias to their own ethnic group.

            I don’t think that they actually have negative bias to white liberals, but they will favor black people over white people, all else being equal.

            Anyway, my point was that white people don’t seem particularly flattered by the media they consume. Or men. And perhaps tech workers.

            The idea that straight white cis male tech-workers get flattered by broader culture to an extent that they are blind to their biases seems absurd to me, given that I constantly see white men being chastised in liberal media for having biases; much more than black people or women.

          • broblawsky says:

            Yet white liberals are the only group with a negative ingroup bias and straight white cis male tech-workers seem to be overwhelmingly liberal.

            Do you have evidence for that from the literature?
            Speaking anecdotally, my experience has always been that white men with technical degrees (a group to which I belong) have an inflated sense of their own ability, even in fields outside of their expertise.

            The idea that straight white cis male tech-workers get flattered by broader culture to an extent that they are blind to their biases seems absurd to me, given that I constantly see white men being chastised in liberal media for having biases; much more than black people or women.

            In the spirit of the OP, I must ask: are you confident that your appraisal of the broader culture is accurate, and that it hasn’t been influenced by your consumption of a specific brand of critique?

          • LadyJane says:

            Fair enough, I should have been more specific: they have negative bias to their own ethnic group.

            I don’t think that they actually have negative bias to white liberals, but they will favor black people over white people, all else being equal.

            I doubt this supposed bias actually “cashes out,” in terms of their actions, in any meaningful way.

            It seems like you’re focusing on a specific type of rhetoric, misunderstanding the nature of that rhetoric (mistake #1), overestimating the prevalence of that rhetoric and its importance to the overall worldview of the people in question (mistake #2), and assuming that they actually live up to that rhetoric in practice (mistake #3). The sum total of these mistakes is that your mental model of the people in question is almost totally detached from how they actually think and behave.

          • Speaking anecdotally, my experience has always been that white men with technical degrees (a group to which I belong) have an inflated sense of their own ability, even in fields outside of their expertise.

            So, in my experience, do most other people.

            I tend to notice the pattern when people pontificate about economics. Here as well as elsewhere. And climate arguments online are full of people, on all sides, who express confident opinions on subjects well outside their expertise.

          • broblawsky says:

            So, in my experience, do most other people.

            I tend to notice the pattern when people pontificate about economics. Here as well as elsewhere. And climate arguments online are full of people, on all sides, who express confident opinions on subjects well outside their expertise.

            Noticing that pattern is classic Gell-Mann Amnesia, right?

          • Clutzy says:

            Noticing that pattern is classic Gell-Mann Amnesia, right?

            No. Failure to notice is.

          • Aapje says:

            @broblawsky

            The graph is on this page.

            are you confident that your appraisal of the broader culture is accurate, and that it hasn’t been influenced by your consumption of a specific brand of critique?

            I consume all kinds of critique, so…

          • Aapje says:

            @LadyJane

            I recognize that this kind of bias/rhetoric/whatever doesn’t actually result in a bias against white people in every context, just like the South was fine with black people picking cotton.

            I also recognize that people often choose to comply with their ideology in ways that benefit (or don’t harm) them personally.

  19. AlphaGamma says:

    Perhaps related is the argument I often see that anyone who is knowledgeable about an issue is biased, because if they weren’t biased they wouldn’t have made the effort to acquire that knowledge.

    See, for instance, the argument that “only gun nuts” know or care about the difference between an assault rifle and a semi-automatic rifle.

    Or for perhaps a less CW-y example, the British argument that anyone who puts forward an argument in favour of HS2 (a planned high-speed train line) must be a “trainspotter” and therefore biased because they think fast trains are cool and therefore a thing we should spend money on.

    • Garrett says:

      Both of these are examples of rhetoric winning out over logic. Replace “trainspotter” with “transportation expert” and suddenly the whole tone of the argument substantially shifts, despite “transportation expert” being a difficult term to pin down. Can you imagine arguing in a snide tone of voice that “you only like high-speed rail because you’re a transportation expert”? At best the argument has shifted from the desirability of the person to the bias of the person.

    • Cory Giles says:

      I actually would generally agree with this argument, but it depends on what type of bias is being alleged. It is absolutely correct to say that I, as an aging researcher, am biased towards the proposition that “aging research is important and valuable and needs more funding”. Same with my colleagues, and replace “aging” with any “X” and it would be true.

      If the allegation is that I, as an aging researcher, am biased in favor of theory A over B, then you would think the answer would be generally no, but it isn’t that simple. I would be biased in favor of theories coming from, and accepted by, other aging researchers within my respectable academic community and against theories from others outside it. I would be biased in favor of theories that support my previous publications, or those which, if true, would help me get funded.

      Scott really already addressed this issue by pointing out that the more potentially biased party, with a larger stake in the outcome, would generally also be the more knowledgeable one. The argument you raise is just the reverse of that, that the more knowledgeable, the more likely to be biased. Since the two are either correlated or uncorrelated, this proposition has to be true in both directions or neither direction. I think it is true in both directions.

    • anonymousskimmer says:

      That relative (who may be a parent) who talks your ear off about their interest.

  20. Rick Hull says:

    Given Yud’s start with rationalist blogging at overcomingbias.com, it would be nice to hear from him or Robin Hanson on this post.

    • Jayson Virissimo says:

      Yes, it would be nice to hear from them on this (or just about any post on this blog), but if the suggestion is that they somehow didn’t anticipate this potential failure mode of bias-talk, well…they did. Their project was about debiasing yourself, not at all about pointing out the biases of others while engaging in debate with them.

      Yudkowsky, in particular, liked to point out that knowing about biases can hurt people.

  21. bagel says:

    Well of course people who are pro-Israel or pro-Palestinian would think that a video trying to be “neutral” is biased; both of those camps think that “neutral” is factually incorrect.

    And just to make it all more fun, the multiple definitions of bias support digging in and never finding common ground. There’s bias as in divergence from facts, but also bias as in divergence from naive evenhandedness. A video trying to be “neutral” (no bias-from-even) may (in their eyes) necessarily be biased-from-facts. And (without delving into any particular issue) on some issues the facts are not even; you’re always biased away from evenness or facts and you may have to choose.

    • Jiro says:

      I think the CS Lewis quote above here is appropriate, only instead of applying it to the biased parties, applying it to us making meta-decisions about bias. You need to first decide whether the video is really neutral. Only after you have done this does it make sense to say “well, both the pro-Israeli and the pro-Palestinian think the video isn’t neutral, they’re obviously both biased”. There’s no substitute for actually figuring out the answer first.

  22. gbdub says:

    I think the relevant question in any given instance is this: “By asserting bias, are you adding context to the argument, or seeking to shut it down?”

    “You’re biased” is potentially useful context for an argument, but used as an argument itself, it’s just an ad hominem.

    “You’re an X, have you considered how being a Y would alter your perspective?” is good. “You’re an X, so your opinion on this is invalid” is bad.

  23. janrandom says:

    Pretty amazing to see totally different people making comparable observations and proposals about today’s bias/bigotry world! I’m referring to Scott Adams’ Periscope on this. You may or may not like him, but his proposal to accept that we are all biased aka bigoted and move on seems to be pretty close. Here is the tweet that led me to it:
    https://twitter.com/RealJeffTidwell/status/1151540548655861760

    Here is the YouTube video (minutes 26:26 to 35:38; you probably don’t need to watch the rest of it):
    https://youtu.be/ppPshRMgQVY

    I considered linking this post on Scott Adams’ Twitter but think that might bring too many conservative people here.

    • Adama says:

      I considered linking this post on Scott Adams’ Twitter but think that might bring too many conservative people here.

      Clever.

    • Jiro says:

      accept that we are all biased aka bigoted and move on

      You shouldn’t accept that you are bigoted (or biased aka bigoted). “Bigoted”, and even “biased” to some extent, is used as a motte/bailey. Admitting to the motte will get you treated as though you’ve admitted to the bailey. This will not go well.

    • Eponymous says:

      I strongly disagree with Scott Adams’ fatalism on this, though. It’s one thing to (correctly) observe the universal flaws in human reasoning processes. It’s quite another to shrug one’s shoulders about it, or (worse) advocate learning how to manipulate them, and glorify those who do. Demons are to be fought, not worshiped or manipulated (good luck).

      If you have the mentality that *getting things right matters*, then you can’t do that. And it *does* matter, at least in some areas. Engineering (the bridge stands or it falls; the rocket reaches the moon or it doesn’t). War. Other areas too.

      Not in politics, obviously.

  24. Eponymous says:

    In my view, “everyone is biased all the time” is a reasonable description of most people’s epistemic level. I aspire to do better than that.

    My pet theory is that we actually have access to a superabundance of information on most questions — more than enough for a smart-human level unbiased Bayesian reasoner to have high confidence on most factual disagreements in our society. We fail to agree mainly because of bias.

    I agree that arguments based on “bias” are generically terrible as persuasion. But more generally, I view arguments as a terrible way to change people’s minds in general. So I’m not really interested in getting better at winning arguments — I’m much more concerned with how to get things right in the first place. And here I’m very much interested in understanding the biases that exist in my information sources, and in myself.

    • But more generally, I view arguments as a terrible way to change people’s minds in general.

      One of my father’s doctrines was that the purpose of an argument was not to persuade someone but to give him the ideas with which he might later persuade himself.

      • Conrad Honcho says:

        That is very wise. I’ve thought of it as “you can’t tell anyone anything. You have to make them think they came up with the idea themselves.”

      • Eponymous says:

        When I do “argue” with someone, I try to do it one-on-one, without an audience, and I try to get them to see things marginally more from my perspective. Whether the word “argument” still applies is questionable; I prefer not to use it at all.

        My view is that most public debate is more about theater and persuading the audience (or ‘scoring points’) than persuading the other side anyway.

        I once heard a primatologist (probably Frans de Waal) say that conversation among humans is the equivalent of grooming behavior — strengthening social bonds between primates. If so, then I think that argument is the equivalent of dominance contests, from displays to physical fights: partly performative, partly directed at personal dominance. I think this definitely applies to online debates; and knowing this, I try to avoid it. Groom, not dominate.

        I think that healthy debate — even public debate — is possible, though it’s vanishingly rare in politics. One can find it in certain venues, such as academic seminar rooms, though it’s certainly not universal even there.

        Probably the first requirement is good faith desire to find the truth on both sides; followed by willingness to set aside ego.

    • Corey says:

      My pet theory is that a superabundance of misinformation is inextricably tied to the superabundance of information. So when, say, a question gains political valence, it becomes impossible for someone without pre-existing expertise in that area to find out the truth.

      • Eponymous says:

        This is a useful and correct observation, though it’s actually consistent with my theory, which is about the set of information available to humanity in general (so to speak). The actual task facing a particular individual human trying to figure out the facts on a particular topic can be quite difficult given the biased-information industry.

        Though I do think that, on most topics, a hypothetical smart-human level unbiased thinker could still reach high confidence on most questions we struggle with. (Meaning factual questions, of course.)

  25. Deiseach says:

    2. A man and a woman are debating abortion.

    Who is more biased? A or B?

    This is a tough question.

    (F)or example, a woman may have hard-to-communicate information about what makes abortion rights feel important to her.

    Indeed it is a tough and very tough question, to the point that bias creeps in when (for example) talk about abortion always assumes the man is against it and the woman for it 🙂

    I’ve had, and witnessed, my share of online arguments where the guy is passionate that “abortion is a human right!” and the woman/women are saying “no, I think abortion is morally wrong”. I’ve mentioned before my dislike of the kneejerk “women’n’minorities” political messaging that treats all women as a monolithic group and assumes there can’t possibly be any women who are on side A rather than side B of the question (or vice versa).

    • Rebecca Friedman says:

      I was tempted to leave a comment along these lines, but you said it better. Thank you.

    • Saint Fiasco says:

      I was going to say the same thing, though I attribute my experience to my Catholic background.

      Maybe in places where Catholicism is not dominant, women really do support abortion more than men.

    • eyeballfrog says:

      Indeed, polls consistently show that support for abortion does not depend on gender. That doesn’t stop the narrative being about men trying to control women’s bodies, but at least it gives an easy way to spot those who don’t care about facts in the discussion.

      • Eponymous says:

        Huh, I find this surprising. Based on my social circle, my impression was a fairly strong M/F difference.

        • aristides says:

          I suspect this is countered by the population that goes to church weekly; forgive me for assuming that describes your social circle. I once had a social circle that went to church weekly, and in general the women were more pro-life than the men. Many men would say they personally are against abortion and would never pressure someone to have an abortion, but were uncomfortable with the state telling women what they couldn’t do with their bodies. The median woman, however, was very vocal about fighting abortion and overturning Roe v. Wade. This was compounded by women making up 60% of all churchgoers. I wouldn’t say it’s a huge split, but probably enough to balance out the country based on the data.

        • edmundgennings says:

          I have noticed that there is an M/F difference in how pro-life people are pro-life that is hard to pin down exactly. It follows the M/F differences in style of political thought, but seems more pronounced, and there is more going on than just that. It can be approximated as “pro-life women are more pro-life than pro-life men,” but it is much more complicated than that.

  26. Nietzsche says:

    It seems to me that some of the issue is the divide between (1) people who want the truth, and (2) people who want to be right. There’s a lot more of group (2). If you want the truth, then you take biases seriously, and won’t just dismiss out of hand the idea that your position on X could be the result of bias. But in a lot of ways that’s just motivation to examine the actual argument and reasoning behind your view. Love the electoral college? Maybe that’s just status quo bias! OK, maybe it is. Let’s dig into the actual pro and con reasoning about the electoral college. If you only care about being right, then accusations of bias are just one more weapon to bring into battle.

  27. sclmlw says:

    Calling someone out for ‘bias’ in these non-productive ways is no different from using a slur. There are established slurs and slurs you make up on the spot, but they’re all meant to be accusatory and to dismiss the target of the slur. They are not meant to introduce additional information to the discussion, and their use has nothing to do with correcting biases.

    I think what Scott is trying to do here is to separate out the use of bias correction from the use of slurs, but without the vocabulary at hand to name them as two separate phenomena. They have been lumped together under the same category to the extent they’re mistakenly identified as the same thing.

    In the context of yesterday’s post about lie inflation, this is a demonstration of how inaccurately inflating a definition can make rational discourse more difficult.

  28. eqdw says:

    Epistemic status: exploratory

    This is called the hostile media effect, though it’s broader than just the media. I’ve talked about it before in against bravery debates. My favorite example is conservatives complaining that the media condemns far-right terrorism but excuses Islamic terrorism (eg 1, 2, 3, 4, 5) alongside liberals complaining that the media condemns Islamic terrorism but excuses far-right terrorism (eg 1, 2, 3, 4, 5).

    Or if you prefer facts to anecdotes: according to a Gallup poll, conservatives are more likely to believe the news has a liberal bias; liberals are more likely to believe the news has a conservative bias. In a study where experimenters showed partisans a trying-to-be-neutral video on the Israel-Palestine conflict, the pro-Israel people said the video was biased toward Palestine, and the pro-Palestine people said the video was biased towards Israel.

    I think there might be an explanation for this particular phenomenon that does not involve bias. It’s sort of related to Gell-Mann Amnesia.

    Let’s say you’re a skoratodist (I hope that isn’t a real word for something). You have a set of preferences, politics, and philosophies that are diametrically opposed to the veronifians. You eagerly tune in to the news every night, looking for both skoratodist and veronifian news coverage.

    Now, at the news agency there might be a few sympathizers for your worldview, and there might be a few detractors, but for the most part these are people who are neutral with respect to your political struggle. They just want to report the news.

    Now, because the news is full of non-skoratodists, when they cover skoratodism, they’re going to make mistakes. They’re going to make some mistakes that are laughably wrong. They’re going to make some mistakes that look like intentional hostility. And each and every time, you’re going to notice these mistakes, and you’re going to say “hey, why is the media so biased against skoratodists”.

    And just the same, every time they cover veronifianism, they’re going to make mistakes. Some mistakes will be laughably wrong. Some mistakes will look like intentional hostility. But because you are not a veronifian, you will not recognize these as mistakes; you will just think that’s what veronifianism is.

    From your point of view, it will look like whenever the news covers your group, they get it maliciously wrong, and whenever the news covers your outgroup, they are a lot more neutral. You will conclude that they’re biased against you.

    At the same time, from the point of view of a veronifian, they will conclude the exact same thing but with polarity reversed. They will conclude that the news is biased against them.

    In actual fact, the news isn’t biased against anyone. The news is just stupid. And each group isn’t biased to see the media as uniquely negative towards them. They’re just incapable of seeing when the news is negative towards other people, and so perceive a unique hostility.

    • Erl137 says:

      I think this is a really good idea. And I think additionally (relatedly?), the “objective voice” is actually a pretty alienated one, and hearing someone else speak in an alienated way about the social world in which you live is an uncomfortable experience.

      The easy examples to find of this are those “what if we reported on the US like a third-world country?” articles. (see, e.g.,: https://www.vox.com/2014/8/15/6005587/ferguson-satire-another-country-russia-china)

      Usually, reading these sorts of articles is pretty unsettling. Some of that is the hostile language, but a lot of that is just clinical language used to describe the familiar. I wonder if such an effect might contribute to what you’re describing above.

      So suppose now skoratodism is a religion. The nightly news says “Flergalmar, a charismatic leader who’s seized control of Reformed Orthodox Skoratodism, called today for ‘vigilant resistance’ to the principles of veronifia.” And you, a skoratodist, are watching this and thinking, “Flergalmar isn’t a ‘charismatic leader’ who’s ‘seized control’, he’s the faithful and dedicated bishop elected in the last synod! And he didn’t ‘call’ for anything; he just gave an Equiskorat-day sermon, people always say stuff like that in a sermon like that, and besides, it was two hours long!”

      Nothing in the sentence is wrong, but the experience is hostile.

  29. nyc says:

    This makes a strong case against leveling bias claims on individuals, but the case against institutions is weaker. For example:

    > hidebound Luddite-ism

    It turns out there is a widespread resentment in the traditional news media of tech companies, because of the way that tech platforms and the internet in general have objectively devastated their traditional revenue sources by supplying aggressive competition for user attention and advertising dollars. (Tech companies have also taken a lot of flak from the same corporations for movie and music piracy, though the major tech companies have little to do with that, make negligible or negative profit from it, and the harmful effect of piracy on traditional media is much less certain than the objective precipitous drop in their advertising revenues.)

    Describing this antipathy as “hidebound Luddite-ism” is, despite being something you supposedly just made up, a fairly accurate description of the situation.

    Similarly:

    > conservatives are more likely to believe the news has a liberal bias; liberals are more likely to believe the news has a conservative bias

    Assuming both types of news are present in the marketplace, this result is unsurprising, because partisans will regard their own sources as neutral and their opponents’ sources as biased, leading to a conclusion of an overall bias in favor of the opposition.

    Yet if a conservative claims that MSNBC has a liberal bias and a liberal claims that Fox News has a conservative bias, that doesn’t imply that they’re objectively both actually neutral. It’s more likely that they’re objectively both actually biased.

    And that has implications for how much personal effort you need to spend on independent investigation before you should believe anything they have to say, or the utility in spending your time listening to them to begin with.

  30. dickwheybrew says:

    I would add that an important exception here is when a new kind of bias is revealed that people are likely to be unaware they may possess, e.g. the Women Are Wonderful effect.

  31. J Mann says:

    I’d say that if you’re inclined to accept an argument from authority (and all of us do, sometimes), a bias argument may be relevant to how much weight you want to give someone else’s opinion.

    At that time, it can be helpful to move to an argument from facts.

  32. Icedcoffee says:

    I think the critical point about biases is not whether they exist generally, but whether (and to what degree) they exist in institutions that are not supposed to have bias (judiciary, news, science, referees, etc.).

    E.g. my background is in law. We recognize that biases are powerful tools to harness, and use an adversarial system to help get to the truth. In that system, biases on the part of the lawyers are not just expected, they are encouraged. But a bias on the part of the judge is a completely different story. If the judge is biased, the whole system fails.

    We do a lot of decision-making in this way: multiple biased advocates argue in front of a (presumably) unbiased adjudicator. Telling people who are supposed to be unbiased that they might actually be biased is socially useful, and people in these roles typically have (or should have) structures in place that force them to account for it (e.g. judicial recusal, statements of potential conflicts of interest, etc.).

  33. kybernetikos says:

    We call someone’s ‘bias’ that we agree with a ‘prior’.

  34. Douglas Knight says:

    My favorite example is conservatives complaining that the media condemns far-right terrorism but excuses Islamic terrorism (eg 1, 2, 3, 4, 5) alongside liberals complaining that the media condemns Islamic terrorism but excuses far-right terrorism (eg 1, 2, 3, 4, 5).

    You’ve been sitting on this draft awhile, haven’t you? I guess since you were writing about communism. Those links have rotted. You should replace broken ones with archive links: Conservative 1; Conservative 5; Liberal 1; Liberal 5.

    (Not many of those links are about terrorism and the ones that are mainly aren’t complaining about it being given a pass. I guess two of the liberal articles (one plagiarizing the other) complain about Breivik not being labeled a “terrorist,” but they’re mainly complaining that he was labeled “right-wing” rather than “Christian.”)

  35. chaotickgood says:

    I just want to say that I am a communist and I really like both this blog and its author.

  36. Harkonnendog says:

    In a study where experimenters showed partisans a trying-to-be-neutral video on the Israel-Palestine conflict, the pro-Israel people said the video was biased toward Palestine, and the pro-Palestine people said the video was biased towards Israel.

    I assume those reports were terrible and people who viewed them latched on to factual errors and the other results of incompetence and assumed they were the result of bias rather than incompetence. Thus the study acted as a force multiplier on the viewers’ biases.
    So really that study shows the media is evil because it encourages people to believe their biases are true, even if their biases are in opposition.
    This conclusion is not a result of my having a bias towards believing the media is biased, which is not a bias, since the media is incompetent. Thank you.

  37. Hoopyfreud says:

    Between this post and the last, I feel like there’s something very weird happening with language here.

    Regarding point 1,

    This ties into the problem where you can just make up a bias, like “hidebound Luddite-ism”. Technophiles will see an anti-tech bias everywhere. And whenever they meet a specific anti-tech person, they can assume that their positions have been shaped not by reason, but by the anti-tech sentiments that are omnipresent in our society. Having explained away their opponents’ position as the product of bias, they’ll feel no need to debate it or question whether it might be true.

    The existence of institutional “hidebound Luddism” is completely independent of the idea that someone can be biased in a “hidebound Luddite” direction. You start off with this and then call them “anti-tech people” as though that’s somehow different. It’s the same! It’s the recognition of an orientation in cognition-space. If you’re more sensitive to news of one man being shot in the US than you are to ongoing Japanese whaling, it’s entirely valid to call your view anthropocentric. This is what we have the word for. That you may have good reasons for your anthropocentrism is immaterial. I don’t expect the whales to care. Or the Syrians, for that matter. Anyone who brings a US murder up in Syria as though it should be a matter of public debate is absolutely going to get called out for bias. Are you suggesting they shouldn’t be?

    As for 3, I don’t think it’s particularly difficult to understand when bias is bad. Bias is bad when it leads to you discounting the impact of your actions on the biased-against group. Bias is bad when your model of other people fails to predict their responses. Bias is bad when you end up substituting “ought” for “is,” or when your [predictions/technologies/systems] underaccommodate a diverse (and yes, I mean meaningfully diverse, where that diversity has a real effect on the impact of the system on people who have particular characteristics) population. If you end up going, “oh, I didn’t think of that,” you’re biased and you’ve gone and done a bad by being biased.

    Now, if you did think of it and still believe that your moral righteousness burns bright enough that you’re entitled to legislate based on your opinion (and sure, this applies to the plumber/teacher just as much to the abortion debate), it’s worth recognizing explicitly that that’s the case. The problem is that “I don’t care” is a pretty hard bullet for most people to bite. They can cry me a river, as far as I’m concerned. It’s good for people to own up to the societal impacts of their preferred policies. “George Bush doesn’t care about black people” and “globalism is destroying the culture and livelihood of honest American workers” are good and compelling arguments that we should listen to. Tradeoffs create victims, and making an argument from the viewpoint of the adversely impacted minority is biased but in no way evil. Justice demands such people be given a voice, and that we own up to the moral calculus we perform. Bias isn’t bad, but the recognition of bias can allow us to dissolve the lies we tell ourselves. I think those lies are what’s evil.

    That actually leads into 4.

    just learning that there’s some possible bias should rarely have much of an effect on this process, especially since with any self-awareness we should probably have already priced all of our own biases in.

    I know I’m really bad at this, especially when it comes to moral questions. Do I understand the character of the suffering I’ve inflicted, or the joy that I’ve brought? I don’t think I do. I am a single point of view floating in a vast sea of experience. The idea that I’ve ever really been able to take an objective view, to effectively price my own biases in, strikes me as very strange, possibly dangerous, and definitely dubious. If you think you’ve got it figured out, I think you’re probably wrong. How are you going to price your biases in, exactly? Worth noting: if you are ever accused of bias and are surprised by it, you haven’t. It honestly sounds like you’re falling victim to the same trap you spotted Eliezer falling into in his book. I think that his strategy of “doggedly focusing on object-level questions from your own point of view” is absolutely awful, for what it’s worth. I think that keeping your eye on the object, but seeking other interpretations (and evaluating them critically) is likely to be much more useful. That’s what bias arguments should be about – the idea that you’re neglecting a perspective, and an impact along with it.

    Finally, 5.

    I think that if you rely on people who are your friends to inform you about perspectives you’re neglecting, you run a huge risk of ending up hill-climbing. The fact that you seek out literature from people you disagree with is a good thing, and possibly the most constructive way to avoid this, but I think you’re making some substantial mistakes by doing this all in the first person. The wide criticism that you’ve gotten from a bunch of people on your reading list – or at least, the bit of it you reviewed – should, I think, be indicative of that.

    • HeelBearCub says:

      I feel like Scott has long wanted certainty in his conclusions. The idea of not knowing the truth, precisely, seems to be something he wishes it were possible to avoid. Perhaps a little odd for someone who is coming out of the Bayesian tradition.

      And I’d argue that for quite a long time he seems to have been circling deeper into the bellybutton that is “The Fallacy of Gray”. As he learns more, is exposed to more opinions, he becomes less and less sure of the ability to know anything.

      • Hoopyfreud says:

        As he learns more, is exposed to more opinions, he becomes less and less sure of the ability to know anything.

        I think you might be right, but I’m not sure that’s a bad thing. I feel like I’ve more-or-less come to terms with the idea that my understanding is imperfect, but that I can only proceed through life with confidence, counterbalanced by an open mind.

        • HeelBearCub says:

          I’m not sure that’s a bad thing

          On balance, I think it is a good thing, even a very good thing, to live in the realization that our knowledge is always imperfect.

          But that isn’t what I mean about Scott.

          I’m thinking more of this quote, frequently misattributed, from Walter Kotschnig:

          Let us keep our minds open, by all means, as long as that means keeping our sense of perspective and seeking an understanding of the forces which mould the world. But don’t keep your minds so open that your brains fall out! There are still things in this world which are true and things which are false; acts which are right and acts which are wrong, even if there are statesmen who hide their designs under the cloak of high-sounding phrases.

          “Learned Epistemic Helplessness” is letting your brain fall out.

  38. Furslid says:

    There are two ways that bias seems to be brought up in arguments.

    The useful one is when it helps us to reason more clearly. “As a middle class white person, you may be biased in favor of the police because every encounter you had with them was civil and the worst outcome was a speeding ticket you could pay without financial hardship. Remember there are negative encounters as well, and your personal experience is not an accurate guide to how often or how bad they are.” Knowing what the potential bias is could help them get a more accurate view of policing as experienced by minorities or acknowledge that things could be worse than they think.

    The problematic way that bias is brought up is to make reasoning harder/impossible. “As a middle class white person your bias prevents you from understanding how minorities experience policing.” This leads them to be unable to have a reasonable position on the issue. If they decide to have no position, apathy is likely. If there is no way to have a better position, they are likely to just believe what they want to or what serves their interests.

    • nyc says:

      The problem is that both of those ways are close to identical.

      Suppose your middle class white person has actually staked out a nuanced position on the police. The war on drugs, police militarization and civil asset forfeiture are highly problematic, are the root cause of the adversarial nature of police interactions with minorities, and should be done away with straight away. Yet the police are only operating within the frame these policies have created for them, and we still do kind of want them around to be able to investigate murders and such.

      But the twitter version of that discussion goes like this:

      The Black Gentleman from Baltimore: Fuck The Police!
      The White Gentleman from Denver: They’re just doing their job.
      The White Lady from San Francisco: Check your privilege!

      Even if Ms. ‘cisco meant her retort in the first sense, in that context it’s quite indistinguishable from the second sense, and could just as easily be interpreted as an invitation to stop talking before a deplatforming effort is organized.

      It probably also doesn’t help that the second sense is the more common one and so the one more likely to be assumed when there is ambiguity.

      But suppose we forget about bias and instead it went like this:

      The Black Gentleman from Baltimore: Fuck The Police!
      The White Gentleman from Denver: They’re just doing their job.
      The White Lady from San Francisco: {link to instances of police misconduct enabled by drug laws}
      The White Gentleman from Denver: So let’s legalize drugs then. Can’t plant drugs on people or use a drug dog to manufacture probable cause if drugs are legal.

      It seems clear that the second line of discussion is more likely to lead to agreement and productive consequences.

  39. Reasoner says:

    I actually think bias can arise in a very “rational” way.

    Suppose I have some beliefs about the world. The way I evaluate the credibility of different information sources is based on whether they make claims I think are correct. For example, if the Daily News tells me the world is flat, and I believe that is incorrect, then I stop trusting the Daily News as much.

    But this has an insidious effect: The publications I consider most credible are the ones which already agree with me. And by consuming info from them, I become more embedded in that publication’s worldview. With my updated beliefs, a publication with a different worldview looks even more like it is full of incorrect claims and not worth my time or attention.

    It’s not even clear what I’m doing wrong. It seems obviously reasonable for me to trust publications based on whether they say incorrect things. And it seems obviously reasonable for me to learn new things from publications which seem credible. But the end result is, the more reading I do, the more I end up drifting towards the center of some particular belief cluster. Even small differences in the “initial conditions” of my beliefs, or small proclivities to spend more time reading arguments for positions I find reassuring, can have large effects in the long run.

    (For fellow machine learning geeks: The process I described is a bit like the Expectation Maximization algorithm.)
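
    Here’s a minimal toy simulation of the loop I’m describing (the outlets, slants, thresholds, and update rules are all made up just to illustrate the dynamic, not a model of any real media):

    ```python
    import random

    # Toy sketch of the feedback loop: I trust outlets whose claims land near my
    # current belief, and I update my belief using the outlets I trust.
    random.seed(0)

    belief = 0.55                    # a slight initial lean on some question
    outlets = {"A": 0.2, "B": 0.8}   # each outlet's fixed slant
    trust = {"A": 0.5, "B": 0.5}     # start out trusting both equally

    for _ in range(2000):
        name, slant = random.choice(list(outlets.items()))
        claim = slant + random.gauss(0, 0.05)   # a noisy claim near the slant
        # "Seems credible" here just means: close to what I already believe.
        if abs(claim - belief) < 0.3:
            trust[name] = min(1.0, trust[name] + 0.01)
        else:
            trust[name] = max(0.0, trust[name] - 0.01)
        # Learn from the claim in proportion to how much I trust its source.
        belief += 0.02 * trust[name] * (claim - belief)

    print(round(belief, 2), {k: round(v, 2) for k, v in trust.items()})
    ```

    The small initial lean snowballs: the belief ends up parked near one outlet’s slant and trust in the other outlet collapses, even though each individual update looked locally reasonable.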

  40. humeanbeingblog says:

    “Bias” is an effective charge, when it is effective, because it serves to indicate that people don’t believe what they are saying because their claims are true, but for some other reason. And sometimes people’s beliefs are explained by the truth, but sometimes they are not. There’s no way to sort out the good case from the bad case in advance, a priori. It’s all a reflection of how we think the world works, which informs what the most plausible explanation of a given element of testimony is. See: https://www.academia.edu/38605329/Explanationism_Provides_the_Best_Explanation_of_the_Epistemic_Significance_of_Peer_Disagreement

  41. Chad Gonczy says:

    The way I use bias considerations is at the tail end of my consideration of the other person’s argument. My default assumption when discussing any issue or arguing about a topic with someone is that they are approximately 5% smarter than I am, not necessarily because it’s true but because I have found it to be a productive way of learning things. Also for personal reasons: my dad was like the dictionary definition of typical mind fallacy growing up, so I have a strong aversion to thinking I know what other people think, or that if I don’t know it, they don’t either. Anyhow, the problem for me comes when I consider another argument, think it through as best I can, and find it to be wrong or wanting. If this person is 5% smarter than me, and they are putting forth an argument I think is wrong, there must be a reason for that. Either I’M wrong (most likely) or they are wrong for some specific reason. People may not be perfectly smart and rational, but they aren’t random; they don’t hold randomly wrong beliefs.

    So that’s where common biases come in, as a way of explaining away the dissonance I feel about someone who is not an idiot holding a wrong belief. This is in effect just kicking the problem you are talking about in this post down the road a step or two, but at least it does allow for the object-level discussion and consideration to happen up front. It certainly still leaves me open to dismissing them incorrectly based on my own biases, or failing to correctly update.

  42. Jacob says:

    At a certain point, being “biased” is indistinguishable from “having different values”. Consider example 7 in a bizarro universe where people had to tell the truth:

    Me: The KKK hurts people and that’s bad.
    KKK: We’ll only hurt black people and that’s good.
    Me: No, actually, hurting people is bad regardless of skin color.
    [repeat prev 2 lines ad nauseam]

    Notice we aren’t disagreeing about facts.

    The reason debates don’t go like that is that people have a strong incentive to hide their true motivations from other people who might not share those same motivations. This is very obvious in the case of #1; a raise for teachers is almost always going to be better for teachers than not giving them a raise. Presumably the plumber doesn’t hold any animosity towards teachers (some of their best friends are teachers!), but what it amounts to is the plumber spending money (at the very least an opportunity cost) to gain some benefit which may or may not be worth that spending. For the teacher, the raise amounts to gaining some personal benefit (income) and possibly also some social benefit. It’s a win/win.

    Example 8: You’re at a car dealership, “arguing” with the salesperson over whether you should buy the car, and if so, at what price.

    Normally we would call this a negotiation, and that’s what some of this boils down to. I don’t know what sort of political pressures King Edward was under which made him consider expelling all the Jews, but those were definitely different from the consequences facing the Jews. Maybe they can reach a solution in which both are happy. Maybe they can’t. None of this has anything to do with objective morality or global consequentialism, because that’s not how politics works. “Policy X would hurt group Y (which you belong to) but help group Z (which you don’t belong to)” is not a winning argument in public discourse, even if it is correct on a utilitarian basis.

  43. mishapom says:

    The links in section 2 seem to be broken.

  44. Lysander says:

    This is why I take it as a general rule that knowledge about biases and hidden motivations is a tool for me to apply to myself, to increase my own understanding of myself; that is the most effective way to use that knowledge. I don’t consider it my business to guess about other people’s biases and to try to correct them. Even if someone tells me about what they see as my biases, I try to see if they could be right to some degree, rather than trying to refute what they are saying or trying to find reasons why they are biased.

  45. jasongreenlowe says:

    I think part of why “non-Jew” doesn’t feel like a nice, neutral sort of person to evaluate what to do about Israel/Palestine is that, if you live in the United States, screening for people who aren’t Jewish doesn’t give you people who don’t have opinions about Jews. If someone has bothered to form an opinion about Israeli politics at all, it’s probably because they either fetishize Jews (à la evangelical Christians who want a strong Jewish state as a useful theological tool to help bring about the Rapture) or because they demonize Jews (à la evangelical leftists who see a strong Jewish state as a useful political tool to help emphasize the way white people are constantly oppressing people of color). Even Americans who aren’t devout Christians or extreme leftists have probably had their views colored by one of these two camps; on average, you’ve spent much more time listening to your Christian/leftist uncle ramble about his thoughts on Israeli politics than you’ve spent, say, reading Foreign Affairs articles about the Middle East or looking up Middle Eastern statistics on Wikipedia.

    You want someone who’s *actually* unbiased around the Israeli-Palestinian conflict? Ask a Mongolian professor of political science, or a journalist from Benin, or a Maori trust council.

    It’s not *easy* to find someone with a high ratio of information to emotional investment, because emotional investment is usually what drives people to acquire information…but it’s not impossible, either.

    • MostlyCredibleHulk says:

      You want someone who’s *actually* unbiased around the Israeli-Palestinian conflict? Ask a Mongolian professor of political science

    It might be OK bias-wise, but you’d have to find a Mongolian professor of political science who is interested enough in the conflict to have given themselves time to read up on the topic and become proficient in it to a level appropriate for a professor of political science (otherwise you might as well ask a Mongolian janitor instead), all while carefully avoiding picking up any biased opinions from their selection of sources or from alignment with pre-existing biases on other topics that seem similar. I am sure there are some Mongolian professors capable of doing such work, but finding one who has actually done all that and is ready to answer your questions would be an extraordinary stroke of luck.

  46. tvt35cwm says:

    TLDR: Always be sceptical, especially of the things that you want to believe.

  47. A bias isn’t a fallacy, and does nothing to invalidate a simple argument or tight syllogism. However, where there are enough moving parts in a topic, or where the matter is primarily empirical, bias is much more of a problem.

    Bias arises from motivations, not reasoning. Generally, accusing someone of bias, or even raising their awareness of it, isn’t very effective at removing the bias, because it doesn’t change their motivations. What accusations of bias are mostly designed to do is change the way a third party perceives a participant in a discussion. If this concerns empirical evidence, it is possibly valid. If it’s used as a counter to a logical argument, especially a simple syllogism, it’s quite possibly a thinly disguised ad hominem. Recognising the dishonest version and pointing it out is useful.

    If a question has two answers, A & B, there are three basic motivations at play – (1) wanting A to be true, (2) wanting B to be true, and (3) wanting to know *whether* A or B is true.

    In the examples this article lists, we have people who either have mostly (1) or (2) and are biased, or who have none of the three motivations and are uninterested. The person you really want is the *unbiased* person who has loads of (3) and only (3), or at least has (3) along with the closest balance you can find between (1) and (2).

    Finding, or perhaps creating, that person is a tricky and lengthy process. Because people can’t effectively perform a search for an unbiased person for every topic they hear about, they require a sophisticated institution / trust network / subculture to deliver that. It needs to habituate its people in (3), and be good at filtering out people with (1)s and (2)s. It has to engrain a culture of curiosity and cultivate values based on a universal human identity. I think it also needs to develop resistance against being compromised by other agendas (politics or money), and to solve the philosophical shortcomings of the theory of objective truth.

    Once upon a time, perhaps academia (and maybe the civil service) was a little bit like this. But now the left is undermining it from within (social agendas and postmodernist attacks on objectivity), and the right is undermining it structurally (corporatisation compromising independence and encouraging self-promotion and KPI obsession). It has all drowned out the softer voice of truth-seeking.

    Until we (re?)establish a voice of that kind that is resistant to both vested interests and virtue-signalling echo chambers, our public discourse, and our ability to solve basic social, political and economic problems, will continue to go down the toilet.

  48. Telomerase says:

    Distinguishing when arguments are just bias is easy… just write an article on immigration and work visas. Then ALL the comments will be based on bias, and you don’t have to use any algorithm at all 😉

    https://www.concordmonitor.com/More-work-visas-27005425

  49. Assistant Village Idiot says:

    I think there is some advantage in understanding bias if one has actually been on another side of an issue. I used to be very liberal, and it had much to do with my mother’s family, my church, and the fashion of the smart kids in my New England city when I was young. I now usually vote with the conservatives, though I don’t always have cultural comfort with them. That does not make me bias-free at all, but it has served to inoculate me against some biases. People who were once X but then lived in Eastern Europe or Thailand or Seattle for a few years and had the scales fall from their eyes have increased credibility for me.

    Or not. “The heart is deceitful above all things,” and I may actually be more arrogant and intransigent because of this. I am also aware that people often claim such knowledge but delude themselves, as in “My opinion about the Catholic Church is correct because I grew up Catholic.”

  50. MostlyCredibleHulk says:

    I think I’m probably biased against communism because many communists I met have been nasty people who tried to hurt me, so I try to solve that by reading more communist books and seeking out good communist arguments wherever I can find them

    I wonder if the author would advocate the same approach for, say, white supremacists or proponents of the Global Jewish Conspiracy theory. I am not sure there are any “good” white supremacist books or arguments, but one cannot know that for sure before one really looks, right? So if the proper reaction to communists being evil is to seek out more good communist arguments and read more communist books, should the same be applied to other groups that appear evil? And if not, what is so special about communists that grants them the exception?