Confirmation Bias As Misfire Of Normal Bayesian Reasoning

From the subreddit: Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview. Once you get through the preliminary Trump supporter and anti-vaxxer denunciations, it turns out to be an attempt at an evo psych explanation of confirmation bias:

Our ancestors evolved in small groups, where cooperation and persuasion had at least as much to do with reproductive success as holding accurate factual beliefs about the world. Assimilation into one’s tribe required assimilation into the group’s ideological belief system. An instinctive bias in favor of one’s “in-group” and its worldview is deeply ingrained in human psychology.

I think the article as a whole makes good points, but I’m increasingly uncertain that confirmation bias can be separated from normal reasoning.

Suppose that one of my friends says she saw a coyote walk by her house in Berkeley. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.

Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.

Is this confirmation bias? It sure sounds like it. When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it. What am I doing differently from an anti-vaxxer who rejects any information that challenges her preexisting beliefs (eg that vaccines cause autism)?

When new evidence challenges our established priors (eg a friend reports a polar bear, but I have a strong prior that there are no polar bears around), we ought to heavily discount the evidence and slightly shift our prior. So I should end up believing that my friend is probably wrong, but I should also be slightly less confident in my assertion that there are no polar bears loose in Berkeley today. This seems sufficient to explain confirmation bias, ie a tendency to stick to what we already believe and reject evidence against it.
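
Here’s a minimal numerical sketch of that update, with made-up priors and likelihoods (nothing rigorous, just to make the shape of the reasoning concrete):

```python
# Made-up numbers, purely to illustrate the update described above.

def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: P(claim is true | friend reports it)."""
    joint_true = prior * p_report_if_true
    joint_false = (1 - prior) * p_report_if_false
    return joint_true / (joint_true + joint_false)

# "A coyote walked by my house": modest prior, reasonably reliable friend.
print(posterior(prior=0.05, p_report_if_true=0.9, p_report_if_false=0.01))
# ~0.83: I believe her.

# "A polar bear walked by my house": tiny prior, same friend, same report quality.
print(posterior(prior=1e-6, p_report_if_true=0.9, p_report_if_false=0.01))
# ~0.00009: I still think she's wrong, but my prior did shift up slightly.
```

The same report shifts both posteriors upward, but only the coyote case shifts far enough to count as believing her.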

The anti-vaxxer is still doing something wrong; she somehow managed to get a very strong prior on a false statement, and isn’t weighing the new evidence heavily enough. But I think it’s important to note that she’s attempting to carry out normal reasoning, and failing, rather than carrying out some special kind of reasoning called “confirmation bias”.

There are some important refinements to make to this model – maybe there’s a special “emotional reasoning” that locks down priors more tightly, and maybe people naturally overweight priors because that was adaptive in the ancestral environment. Maybe after you add these refinements, you end up at exactly the traditional model of confirmation bias (and the one the Fast Company article is using) and my objection becomes kind of pointless.

But not completely pointless. I still think it’s helpful to approach confirmation bias by thinking of it as a normal form of reasoning, and then asking under what conditions it fails.


115 Responses to Confirmation Bias As Misfire Of Normal Bayesian Reasoning

  1. Riverless Uniform says:

    I wholeheartedly agree, and feel that way toward most cognitive biases.
    I also am very wary of any explanation treating confirmation bias as a unified phenomenon. It covers a very wide range of situations. For instance, here are several very different phenomena that could each be described as “confirmation bias”.

    – It is almost always easier to integrate expected occurrences than outliers. By “easier”, I mean “requires less effort / computation time”. To integrate an outlier, you have to actually think about it. Given that most people have limited spoons / available brain-time / etc., there is a natural bias to confirm stuff. A confirmation bias.

    – Real life is not boolean. When there is an efficient black-box model for `A`, there usually is no obvious alternative for `not A`. So “lowering your credence in [X]” is just not practical in lots of cases.

    – Symbolically, for a lot of reasons, it’s much easier to reason about `A => B` than `not B => not A` (you’ll likely wrongly flip a negation or something; a tiny truth-table check of the equivalence appears after this list).
    Fun Test: Make people reason about chains of implications in the form of `A => B`, `C => D`, etc. See how long they take to check propositions. Make people reason about chains of implications involving `not`s, see how quickly they get lost.

    – Real life is adversarial. Regularly, when you get offered evidence against `A`, and it so happens that there is an alternative `B` with an alternative efficient model, it’s usually just people cherry-picking evidence to convert you.
    Fun Test: Look at any political debate (political as in “with stakes determining who gets some power”, it does not need to be an election thing or whatever).

    – Most “models” are less like gear-level predictive models and more like black-box pattern-matching models checking if some decision strategy / authority / group is not too far off from reality. (Most ideologies built defense mechanisms against this tendency, such as “Contradictions are only in your mind, but your heart will know” / “Contradictions are a test of faith” / “People who believe in contradictions are banned”.) But this is actually how a lot of reasoning is done: basic black-box decision strategy with black-box expectations that are mostly checks that everything is normal and that we can safely keep applying the strategy.
    Fun Test: Things like the Wason selection task. This totally makes sense in a basic “check that everything is normal” paradigm. Or ask anyone to verify anything.

    – Most people do not reason probabilistically. They also do not want to “make the wrong choice”. And the world is usually shaped on commitments to long-term plans (make a bet between A, B and C, and then commit to it for 10 years).
    This explains the sunk cost fallacy (which is just people not willing to accept that they did not make the choice with highest EV, because it pattern-matches to “having made the wrong choice”). Basically, even if people were willing to update, for external reasons, they could not change their strategy (usually not worth it + very risky to start from scratch), and the result would be that they simply end up depressed about their situation.
    Most people do not feel like they could make other choices, and a lot of them were at least once forcefully put into a new situation, and it leaves scars.
    Sad Test: Ask people why they feel this way, and what the cruxes are in those situations. If you have ever talked with someone who has a pattern of behavior that they themselves describe as bad (or damaging, or self-destructive, etc.), you will know what I mean.
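
    To make the `A => B` vs `not B => not A` point concrete, here is the tiny truth-table check mentioned above (all it shows is that the contrapositive always agrees with the original implication, while the tempting flip `not A => not B` does not):

    ```python
    from itertools import product

    def implies(p, q):
        # Material implication: "p => q" is false only when p is true and q is false.
        return (not p) or q

    for a, b in product([False, True], repeat=2):
        print(a, b,
              implies(a, b),          # A => B
              implies(not b, not a),  # contrapositive: always matches A => B
              implies(not a, not b))  # the mistaken flip: diverges when A=False, B=True
    ```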

    • mtl1882 says:

      +1 to all you said, especially this:

      – Real life is adversarial. Regularly, when you get offered evidence against `A`, and it so happens that there is an alternative `B` with an alternative efficient model, it’s usually just people cherry-picking evidence to convert you.
      Fun Test: Look at any political debate (political as in “with stakes determining who gets some power”, it does not need to be an election thing or whatever).

      Confirmation bias comes up a lot in discussions of politicized issues, but those are exactly the type of issues in which you’d have reason not to take anything at face value, and the reason people talk or write about them often isn’t to actually exchange useful information to inform real-world choices. Dismissing it as game-playing often makes perfect sense–the problem is that your side is doing the same thing, so there are really no useful data points. If you’ve bought into one side, your priors are probably all messed up to begin with, like with the anti-vaxxer.

      Whereas with concrete issues, like in Scott’s example, that there are no polar bears around is a solid prior. Also, it plays out differently than if the neighbor claimed to have found out that vaccines were poison, where many people would say “oh she’s reading nonsense online…of course vaccines are safe,” because they understand such claims are connected to a larger public adversarial discussion. Since there is no such discussion going on about polar bears in the neighborhood, the response is to assume there is some concrete cause for her behavior that you can reason about. “Is she having a mental health problem? nah, she probably saw something else and worked herself up into a frenzy.” Unlike with a political example, you wouldn’t bother to say to yourself, “of course my neighborhood is safe from polar bears,” because you don’t assume that the intent may have been to undermine that worldview.

      Confirmation bias looks so scary because we highlight situations in which people are already detached from reality, so things go bad very quickly and also implicate opposing beliefs. But a lot of the time those beliefs are more like stories and don’t affect real world actions, or people’s priors would shift closer to the real world. In more routine situations, it seems fine, though of course it can still go really wrong, because exceptions happen. But we have to take some things for granted to function.

      • Aapje says:

        The adversarial nature of politics also means that it makes sense to strongly favor your tribe’s politics, because you lose if you stand alone.

        Whereas with concrete issues, like in Scott’s example, that there are no polar bears around is a solid prior.

        Unless you are Biden.

  2. Kaj Sotala says:

    See also Mercier & Sperber 2011 on confirmation bias:

    … an absence of reasoning is to be expected when people already hold some belief on the basis of perception, memory, or intuitive inference, and do not have to argue for it. Say, I believe that my keys are in my trousers because that is where I remember putting them. Time has passed, and they could now be in my jacket, for example. However, unless I have some positive reason to think otherwise, I just assume that they are still in my trousers, and I don’t even make the inference (which, if I am right, would be valid) that they are not in my jacket or any of the other places where, in principle, they might be. In such cases, people typically draw positive rather than negative inferences from their previous beliefs. These positive inferences are generally more relevant to testing these beliefs. For instance, I am more likely to get conclusive evidence that I was right or wrong by looking for my keys in my trousers rather than in my jacket (even if they turn out not to be in my jacket, I might still be wrong in thinking that they are in my trousers). We spontaneously derive positive consequences from our intuitive beliefs. This is just a trusting use of our beliefs, not a confirmation bias (see Klayman & Ha 1987). […]

    One of the areas in which the confirmation bias has been most thoroughly studied is that of hypothesis testing, often using Wason’s rule discovery task (Wason 1960). In this task, participants are told that the experimenter has in mind a rule for generating number triples and that they have to discover it. The experimenter starts by giving participants a triple that conforms to the rule (2, 4, 6). Participants can then think of a hypothesis about the rule and test it by proposing a triple of their own choice. The experimenter says whether or not this triple conforms to the rule. Participants can repeat the procedure until they feel ready to put forward their hypothesis about the rule. The experimenter tells them whether or not their hypothesis is true. If it is not, they can try again or give up.

    Participants overwhelmingly propose triples that fit with the hypothesis they have in mind. For instance, if a participant has formed the hypothesis “three even numbers in ascending order,” she might try 8, 10, 12. As argued by Klayman and Ha (1987), such an answer corresponds to a “positive test strategy” of a type that would be quite effective in most cases. This strategy is not adopted in a reflective manner, but is rather, we suggest, the intuitive way to exploit one’s intuitive hypotheses, as when we check that our keys are where we believe we left them as opposed to checking that they are not where it follows from our belief that they should not be. What we see here, then, is a sound heuristic rather than a bias.

    This heuristic misleads participants in this case only because of some very peculiar (and expressly designed) features of the task. What is really striking is the failure of attempts to get participants to reason in order to correct their ineffective approach. It has been shown that, even when instructed to try to falsify the hypotheses they generate, fewer than one participant in ten is able to do so (Poletiek 1996; Tweney et al. 1980). Since the hypotheses are generated by the participants themselves, this is what we should expect in the current framework: The situation is not an argumentative one and does not activate reasoning. However, if a hypothesis is presented as coming from someone else, it seems that more participants will try to falsify it and will give it up much more readily in favor of another hypothesis (Cowley & Byrne 2005). The same applies if the hypothesis is generated by a minority member in a group setting (Butera et al. 1992). Thus, falsification is accessible provided that the situation encourages participants to argue against a hypothesis that is not their own. […]

    When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor.

    When a group has to solve a problem, it is much more efficient if each individual looks mostly for arguments supporting a given solution. They can then present these arguments to the group, to be tested by the other members. This method will work as long as people can be swayed by good arguments, and the results reviewed in section 2 show that this is generally the case. This joint dialogic approach is much more efficient than one where each individual on his or her own has to examine all possible solutions carefully. The advantages of the confirmation bias are even more obvious given that each participant in a discussion is often in a better position to look for arguments in favor of his or her favored solution (situations of asymmetrical information). So group discussions provide a much more efficient way of holding the confirmation bias in check. By contrast, the teaching of critical thinking skills, which is supposed to help us overcome the bias on a purely individual basis, does not seem to yield very good results (Ritchart & Perkins 2005; Willingham 2008).
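
    A toy version of the 2, 4, 6 task (my own illustration, not from the paper) makes the “positive test strategy” point concrete: if the hidden rule is “any ascending triple” and your hypothesis is “even numbers ascending by 2”, every positive test comes back “yes”, so only a triple your hypothesis predicts should fail can reveal the mismatch.

    ```python
    # Toy illustration of Wason's 2-4-6 task; the rules and triples are my own choices.
    hidden_rule = lambda t: t[0] < t[1] < t[2]              # the experimenter's rule
    my_hypothesis = lambda t: (t[0] % 2 == 0 and            # guessed from (2, 4, 6)
                               t[1] - t[0] == 2 and
                               t[2] - t[1] == 2)

    # Positive tests: both the hypothesis and the real rule say "yes",
    # so the feedback never distinguishes them.
    for triple in [(8, 10, 12), (20, 22, 24), (100, 102, 104)]:
        print(triple, my_hypothesis(triple), hidden_rule(triple))

    # A triple the hypothesis rejects is the only kind that can falsify it.
    print((1, 2, 3), my_hypothesis((1, 2, 3)), hidden_rule((1, 2, 3)))   # False, True
    ```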

    • T. Clark says:

      Thank you for adding specificity to what “confirmation bias” means, Kaj Sotala. My sense is that many people use “confirmation bias” as a general stand-in for a broader set of biases that lead to under-reactions to new evidence, but the technical meaning is more specific, and relates to the tendency to seek confirming rather than disconfirming evidence. I am going to quote from an article I wrote with my friend Michael (“Altruism, Righteousness and Myopia”) where we tried to summarize this.

      “An epistemic double standard consists of both (cognitive) confirmation bias and (affective) disconfirmation bias. Confirmation bias occurs when individuals seek out confirming evidence for a proposition — a relatively easy test for a proposition to pass — and then stop, rather than seeking out disconfirming evidence as well, the harder test. One class of propositions fails the easy test, a second class can pass the easy test but fails the hard test, and a third class can pass both tests.

      “Notably, confirmation bias comes from two non-motivated errors in the way we test hypotheses (Kunda and Nisbett 1986; Kunda 1987). First, we tend to frame matters by asking ourselves whether an agreeable question is true (Miller and Ross 1975): for example, “Am I intelligent and well-meaning?” Second, we systematically tend to seek confirmation rather than disconfirmation (Klayman and Ha 1987) … When we encounter an agreeable proposition in conversation with others, the standard is “can I believe this?” When we encounter a disagreeable proposition, the standard becomes “must I believe this?” [See Tetlock 2005 on this.]

      “If confirmation bias consists of lowering the bar that privileged propositions must clear (and raising the bar that non-privileged propositions must clear, by seeking disconfirmation of them), disconfirmation bias (also known as the quantity-of-processing view) consists of trying harder to get the privileged propositions over the bar than to get the non-privileged propositions over. Disconfirmation bias occurs because challenges to one’s existing attitudes create negative affect (that is, concern or anxiety) that intensifies cognitive processing (Schwarz, Bless, and Bohner 1991; Ditto and Lopez 1992; Ditto et al. 1998). As with confirmation bias, people do not realize that disconfirmation bias leads to unreliable evaluations of alternative hypotheses.

      “While the epistemic double standard explains why we would usually see motivated reasoning, it also suggests why we would sometimes see counter-motivated reasoning (Elster 2007, 39), or the confirmation of unpleasant beliefs. In some cases, what we fear is set up as the default hypothesis that we then seek to confirm.”

  3. phi says:

    From a physics perspective, that sounds a lot like getting stuck in a local minimum. If I have a view, and some reasons for holding it, then I might ignore a piece of contrary evidence because the reasons for my own view are much stronger. Then another piece of contrary evidence comes along, and I ignore that too. If I do this enough times, then I will end up with an incorrect view, even if I had enough observations to deduce the truth, simply because my brain is unable to jump out of the local minimum and choose the theory that best fits the current set of observations over the one that looked promising at the start.

    This also leads to a prediction: It seems very likely that changing one’s mind about many facts at once is more difficult than changing one’s mind about just one or two facts at once. So a local minimum made of many interlocking misconceptions will be more difficult to get out of than an individual misconception. A good name for a local minimum made of many interlocking misconceptions might be “ideology”. Then I would guess that naming an ideology greatly lessens its power. After all, it is much easier to entertain the idea of “what if Christianity was not correct” than the idea “what if there was no God, and no afterlife of any kind, and no Devil, and the universe is 13 billion years old, and living things gained their forms through repeated mutation and selection rather than intentional design, etc, etc.”
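
    To make the physics analogy a bit more literal, here is a tiny gradient-descent sketch (purely illustrative; the landscape and step size are arbitrary): small downhill steps from the wrong starting point settle into the shallow well even though a deeper one exists nearby.

    ```python
    # A double-well "loss" landscape: shallow local minimum near x = +1.2,
    # deeper global minimum near x = -1.3. The numbers are arbitrary.
    def loss(x):
        return x**4 - 3 * x**2 + 0.5 * x

    def grad(x):
        return 4 * x**3 - 6 * x + 0.5

    x = 1.0                        # start on the "wrong" side of the hill
    for _ in range(2000):
        x -= 0.01 * grad(x)        # only small local updates, like minor belief revisions
    print(round(x, 3), round(loss(x), 3))   # ends near +1.18, not at the deeper minimum
    ```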

    • DavidS says:

      I really, really like this analogy (though, presumably because I know no physics, I’m more aware of the idea of a local MAXIMUM than a minimum, and I feel it’s more intuitive to think of it that way round: people are at the best and most coherent local position, so any change on any matter of note creates dissonance).

      The bit where the evopsych that started this comes in is that the ‘best and most coherent’ original position includes views that are desirable for less rational reasons, eg assuming those you politically disagree with are personally corrupt while dismissing evidence of the same for those you agree with.

      • peak.singularity says:

        An easy way to understand why it’s called a “minimum” in physics is to think of a ball rolling downhill and where this ball is going to end up in a hilly landscape.

    • Hoopdawg says:

      I would guess that naming an ideology greatly lessens its power. After all, it is much easier to entertain the idea of “what if Christianity was not correct” than (…)

      Oh, but that phrasing would suggest merely naming an ideology would be sufficient here – and it’s most likely not. Christianity can be pointed at because its adherents will usually self-identify as such – but this is not necessarily true for other ideologies. In fact, perhaps as an adaptive strategy, a phenomenon arose recently where people subscribing to certain sets of interlocking assumptions would describe them as, say, “just common decency” or “just basic economics”. Names have been invented for them, but are easily dismissed as uninformed misnomers, and if you use them, you’re most likely just preaching to the choir.

      • peak.singularity says:

        Reminds me of the surprising pushback I got when I started discussing Capitalism using this term. Typical “fish not noticing water” issue?

  4. daneelssoul says:

    I mean sure, you shouldn’t reverse a strong prior based on a single weak piece of contradictory evidence. But you shouldn’t ignore the evidence either. And you definitely shouldn’t exhibit the backfire effect if you want to converge on accurate beliefs.

    So really, I think this only explains a fraction of confirmation bias at most.

    • spork says:

      I was thinking something similar: Let’s not blame people for bad priors. That can happen to even the most rational of Bayesians. Let’s instead blame people for huge asymmetries in how they weigh prior-confirming and prior-disconfirming evidence. If they freakishly and consistently overvalue (“bias”) confirming data and undervalue other data, they’re just not reasoning very well. We can be puzzled about why this happens to some people, but I don’t understand how we’d be puzzled about what confirmation bias is. It’s one of those few interesting phenomena which is named so perfectly that it doesn’t really need much extra description imo!

    • Wasserschwein says:

      The backfire effect has failed to replicate.

  5. Bugmaster says:

    That’s not what “confirmation bias” means. Confirmation bias is what happens when you are faced with multiple pieces of evidence, and you ignore the ones that do not match your preconceptions. For example, let’s say that you are wandering around with a group of friends, and friend A says, “OMG, a polar bear just ducked into that alley”. Friends B and C say “what polar bear, we were looking right at the spot and saw no polar bear”, and friend D says, “actually, I did see a really burly dude in a white fur coat”. If you ignore what B, C, and D say, and use A to boost your P(“polar bears exist”) even higher, you’re engaging in confirmation bias.

    • mcpalenik says:

      I think that’s exactly what Scott said. It’s a failure to weigh the evidence and update your priors sufficiently. Each bit of evidence should shift your priors a bit until you mostly believe there’s a polar bear on campus, but instead you get stuck in the no-polar-bears belief even after multiple sightings.

      • eric23 says:

        I agree with Bugmaster, and did not see this in what Scott said.

        • North49 says:

          …but now you’ve updated your priors on it?

        • mcpalenik says:

          He says:

          When new evidence challenges our established priors (eg a friend reports a polar bear, but I have a strong prior that there are no polar bears around), we ought to heavily discount the evidence and slightly shift our prior. So I should end up believing that my friend is probably wrong, but I should also be slightly less confident in my assertion that there are no polar bears loose in Berkeley today.

          and

          The anti-vaxxer is still doing something wrong; she somehow managed to get a very strong prior on a false statement, and isn’t weighing the new evidence heavily enough.

          (emphasis mine)

          • Bugmaster says:

            IMO the classic example of confirmation bias is when a psychic cold-reads people. The psychic will make literally hundreds of guesses, but the victim will only remember the one or two hits, and ignore the misses — even though even a single miss should expose the supposedly infallible psychic as a fraud. I didn’t get that impression from Scott’s post, but maybe I was wrong.

            But I do agree that, in the larger scheme of things, you could acquire a confirmation bias as just a normal effect of Bayesian reasoning. If your confidence in polar bears or psychics is (1-epsilon), then there may not be enough evidence in the world to convince you otherwise. You can keep updating your confidence down by epsilon every time you encounter a piece of negative evidence, but you’ll still die of old age before you even clear 0.99. Perhaps having an abnormally high prior about anything is a bias in and of itself.
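
            To put rough numbers on the (1-epsilon) point, in log-odds terms and assuming each piece of counter-evidence is independent (toy numbers, not a claim about real psychics): a sufficiently extreme prior really does take an enormous amount of modest evidence to move.

            ```python
            import math

            def updates_needed(prior, target, likelihood_ratio):
                """Independent pieces of counter-evidence, each with the given
                likelihood ratio against the belief, needed to drag the
                posterior from `prior` down to `target`."""
                log_odds = lambda p: math.log(p / (1 - p))
                return math.ceil((log_odds(prior) - log_odds(target)) / math.log(likelihood_ratio))

            # Prior of 0.999999 in "psychics are real":
            print(updates_needed(0.999999, 0.5, 1.1))    # ~145 decent pieces of evidence
            print(updates_needed(0.999999, 0.5, 1.001))  # ~13,800 whisper-weak pieces
            ```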

    • North49 says:

      I’m maybe not as well versed in the literature as many here, but I don’t see the conflict. I went to Australia, and it’s something of a national prank there to try to convince tourists that there is a dangerous relative of the koala bear called a ‘drop bear’. We were in several towns across the continent and were told about drop bears many times by unrelated parties hundreds or thousands of miles away from each other. What is your proposed attitude towards encountering this evidence? My priors led me to believe that I would be aware of such an animal if it existed, thus my conclusion was that this was in fact a nationwide prank, as unfeasible as that sounds on the surface. Should I have updated my belief instead to drop bears being a well-kept nationwide secret? I weighed the multiple pieces of evidence as consistently very low based on my priors, and weighed one headline in a Google result later as very strong, simply because it was in line with my priors. Multiple data points, but weighted very differently depending on their conformity with my priors. This seems to bridge your and Scott’s arguments?

      • Jiro says:

        That’s a tricky case because evidence doesn’t add up linearly. If there are already other types of evidence such as photos and encyclopedia articles, eyewitness reports are additional, good, evidence. But if all there is is other eyewitness reports, eyewitness reports are not good evidence.

      • ThaomasH says:

        This is sort of different. A person new to a place has no prior about which animals are dangerous, so even one report (we assume people tell the truth) would lead you to believe that there are dangerous “drop bears.” And even one more unrelated prankster would create a pretty strong prior.

      • Emby says:

        A nationwide prank is actually a pretty reasonable hypothesis under any circumstances. Ingroups will often attempt to fool outgroups into thinking ridiculous things, as a method of building social cohesion (and I guess as a form of ‘stress testing’ the newcomer – if you respond in a non-angry non-aggressive manner to this mild kind of hazing, you’re probably trustworthy)

        See also – sailors sending new recruits to the stores for a long weight. And Scottish people will attempt to persuade you to come on a haggis hunt with them (a haggis is a little furry creature with one leg longer than the other, so that it can run round the mountain more efficiently)

        Did anyone let you in on the secret of the Hoop Snake, and people who hop to work on the back of a kangaroo?

    • mtl1882 says:

      But how often does such a situation happen in real life?

      This happens in situations where things are a lot easier to finesse and tend to be abstract and narrative-driven—as in, things covered in the media. The problem seems to be present largely when the information environment itself is unreliable and distorted, even if it doesn’t look that way at a glance—you’re relying on selection and filtering and framing by others.

      There certainly are real life situations where this happens, but it is rare that people actually witness the same thing in-person and then just contradict each other in a motivated way based on preconceptions. They will instead give excuses or change the framing of the discussion somehow. And outside of the media, you rarely have a chance to keep encountering the same situation and take a consistent side, because “sides” don’t exist as much in reality. Children’s sports, and stuff involving children in general, is probably a good venue for it. Plenty of parents will probably make all sorts of excuses for why their kid doesn’t deserve a foul, while seeing a clear-cut case for everyone else.

  6. Anatoly says:

    If you have a really strong prior, then it should shift very little when you see something that contradicts it *or* something that confirms it. “Confirmation bias” is when you treat confirming evidence as strong and important while ignoring, or giving very little weight, to contradicting evidence. That isn’t what happens when you have a really strong prior and apply normal Bayesian reasoning, I think.

    • North49 says:

      I think the point is that the priors are an input to weighting new evidence, and the magnitude of the weighting is proportional to the strength of the prior. If I have two new pieces of evidence that are equally strong for and against a proposition, I weight the one conforming to my prior as stronger, because that is the function of priors. This process repeats itself for all new evidence, creating a positive feedback system in favor of the prior.

      Let’s take the extreme case, where the evidence itself is nearly irrelevant, and merely confirming or contradicting the prior is almost the entire determinant of assigned weight. This just means that the feedback signal is near or above 1. The difference is in the strength of the feedback, not the reasoning process.
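
      Here is a toy simulation of that feedback loop (my own arbitrary weighting rule and `gain` knob, not anything from the literature): when the weight of each new piece of evidence depends on how well it agrees with the current belief, perfectly balanced evidence still drags the belief further toward whichever side it started on.

      ```python
      # Toy model: evidence is weighted by its agreement with the current belief.
      def run(initial_belief, evidence, gain=4, step=0.05):
          belief = initial_belief          # credence in A, between 0 and 1
          for e in evidence:               # e = 1.0 supports A, e = 0.0 supports not-A
              agreement = 1 - abs(belief - e)
              weight = agreement ** gain   # prior-conforming evidence gets more weight
              belief += step * weight * (e - belief)
          return belief

      balanced = [1.0, 0.0] * 50           # equal amounts of pro and con evidence
      print(round(run(0.7, balanced), 2))  # starts mildly pro-A, ends strongly pro-A
      print(round(run(0.3, balanced), 2))  # starts mildly anti-A, ends strongly anti-A
      ```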

    • sty_silver says:

      Yes.

      Some of the responses are arguing that this is what Scott was saying. I don’t think so. Look at Bayes’ formula:

      P(A|B) = P(B|A) * P(A)/P(B)

      Scott was basically saying “well if P(A) is small enough, then P(A|B) will be small, and that’s confirmation bias. So the same thing happens in the vaccine and the polar bear example.” But that strikes me as a very poor definition of confirmation bias that doesn’t capture what is usually meant by it. Instead, I’d argue confirmation bias is to

      1. Overestimate P(B|A) whenever you want P(A) to be true (“Yes, this is totally consistent with what I believe!”); or

      2. Not update at all

      That is, I’m pretty sure that’s what vaccines-cause-autism people are doing, and I would call that confirmation bias. That’s meaningfully different from what’s happening in the polar bear example. There, P(A) is just legitimately so small that P(A|B) remains small.

      Suppose you had two different people independently tell you that they saw polar bears outside of their window – and you are somehow certain that they cannot have coordinated this as a joke. Would you believe them now? You probably would, and that’s strongly non-analogous to the vaccines-case.
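
      With toy numbers (a one-in-a-million prior for a loose polar bear, and each witness’s report assumed to be 1,000 times more likely if there really is a bear), the likelihood ratios of independent witnesses multiply, which is why the second witness moves you much further than the first:

      ```python
      # Toy numbers: prior 1e-6, likelihood ratio 1000 per independent witness.
      def posterior_prob(prior, likelihood_ratio, n_witnesses):
          prior_odds = prior / (1 - prior)
          post_odds = prior_odds * likelihood_ratio ** n_witnesses   # independence assumed
          return post_odds / (1 + post_odds)

      for n in (1, 2, 3):
          print(n, round(posterior_prob(1e-6, 1000, n), 4))
      # 1 witness:   ~0.001  (still almost certainly no bear)
      # 2 witnesses: ~0.5    (now it's a coin flip)
      # 3 witnesses: ~0.999  (I believe them)
      ```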

      • Johnny4 says:

        I want to stick up a bit for Scott here. Do you update your credence in anti-vaxxerism every time somebody tells you about how someone they know got autism from getting vaccinated? Presumably not, because you’re convinced that this is bogus “evidence”. Or if you do update, it’s an infinitesimal update.

        Anti-vaxxers are convinced, wrongly, that all the “evidence” for vaxxerism is bogus evidence, and so they likewise don’t update at all, or only infinitesimally, when they encounter vaxxer evidence.

        Conspiracy theories like anti-vaxxerism are sorta like skepticism: we can say, from the outside, that something irrational probably had to happen for someone to come to believe a conspiracy theory or accept skepticism. But once you’re there (once you accept the theory or become a skeptic), it might be rational for you to stay there in the face of evidence that should, if you weren’t already there, convince you not to go there.

        Obviously, it’s a presupposition of what I just said that people sometimes update incorrectly, and that’s obviously true anyways. But in most contexts I don’t think we have enough information to distinguish between whether someone has crappy priors or is doing a crappy job updating. And if it looks like really bizarre updating, it might be more charitable (plausible?) to attribute weird priors to them. That’s just a hunch though, I’m not super versed in this literature.

        • sty_silver says:

          I want to stick up a bit for Scott here. Do you update your credence in anti-vaxxerism every time somebody tells you about how someone they know got autism from getting vaccinated?

          Yes, I think I absolutely would. It’s never happened, but if people get autism from being vaccinated, that’s serious evidence.

          If you showed me some source (video or article or whatever) of someone claiming this, that would be very different – the world is big enough that I know such cases exist, and there is a massive selection effect going on, so it’s not surprising that an anti-vaccine person knows such sources. Differently put, the existence of one such source is very weak evidence about how often it happens. But if it’s a person I know in real life – that would be a strong update.

          • Johnny4 says:

            Wait, I didn’t say that you found out that somebody did get autism from being vaccinated, just that somebody told you that somebody they know got autism from being vaccinated. If you found out that somebody actually did get autism from being vaccinated that would be serious evidence indeed!

          • sty_silver says:

            Oh – sorry that I didn’t read your post properly.

          • Joseph Greenwood says:

            I personally know several people who have autism, and were vaccinated (maybe autism causes them to be vaccinated—I dunno, correlation does not imply causation). Their parents tell me that they did not display signs of autism before they were vaccinated. Is it rational for me to be an anti-vaxxer? If these parents personally know each other, is it rational for them to be anti-vaxxers?

          • Johnny4 says:

            @Joseph Greenwood

            I’m not sure if your question is directed at me, but from a Bayesian perspective sure, your credence that vaccines cause autism should go up a bit after hearing those stories from people you know. That wouldn’t make it rational for you (or the parents if they talk to each other) to be anti-vaxxers though! As you note, correlation doesn’t imply causation, and given that almost everybody gets vaccinated very early in life, you’d expect that almost everyone diagnosed with autism had been vaccinated previously. Credence bumps can be so small as to be insignificant. But they’re still there.

        • voso says:

          I have several priors towards vaccines. I have a strong prior for vaccines being safe and effective, and I have a strong prior for the existence of a significant deranged movement that preaches about the dangers of vaccines while completely ignoring all evidence.

          Where I hear the claim that vaccines cause autism determines which prior it is tested against; if I see a shared Facebook post from someone I went to high school with from the “What Science is Hiding from you Foundation”, it is likely to confirm the prior that an antivax movement exists but if, say, Scott were to tell me “Hey, against all conceivable odds some new evidence has come to light…”, then I could see my prior for vaccines being safe actually being moved somewhat.

          Maybe there’s something here about priors inoculating against other priors? (No pun intended)

        • Purplehermann says:

          Anti-vaxxers are all essentially coming from the same places: putting stuff in babies to mess with nature is scary, especially stuff that is known to be toxic to humans (mercury and aluminum, though this particular form of mercury has been shown to be fine) or really gross; there are more than a few cases where parents of children who had bad reactions blame the vaccine; vaccines haven’t gone through enough gold-standard testing for them to be comfortable with this; and how can you trust the science when there is so much money in vaccines? (I am sympathetic to some of these worries but the pros far outweigh the risks imo.)

          Given that we understand the claims, you generally shouldn’t update much (or at all) when someone says something along these lines. You already expected some percentage of people to have this opinion and took that into account, there is usually no reason to update your belief on the issue because of confirmation that people act the way you expected them to.

          Same thing for anti-vaxxers when confronted with anything besides double-blind long-term tests with controls built to find the issues if they exist (and being monitored by some known anti-vaxxers, or it being otherwise verified that there wasn’t any bad science going on, would be very helpful). They knew they’d be shown what you are showing them and still believed otherwise; showing them won’t make them update.

          • sharper13 says:

            @Purplehermann, I think it’s a mistake to believe you can generalize all anti-vaxxers like that. At best, perhaps all anti-vaxxers you have direct experience with, if that’s true?

            My experience is that an anti-vaxxer may range from someone with an IQ of 90 who lives in Seattle and gossips about it with her left-wing neighbors, to someone who is smart but a full-on alt-right conspiracy nut, to someone who has read and understood the related science and knows better than your doctor which mandatory vaccines are the riskiest, which ones pass a cost/benefit analysis and which ones don’t, plus how many people have an active immunity for each.

            The respect (in terms of updating priors) I’d give someone’s argument with each of those three sets of knowledge is very different, even if you can lump them all into your larger category of being an anti-vaxxer.

          • Purplehermann says:

            @sharper13 there is definitely a range of views, but I would contend that they are mostly coming from the same set of reasons (though they won’t all use all of them, their reasoning will be taken from this set).

            The 90 IQ gossip thinks the way she does because of some of these things (or because someone said it was true, but that kind of reason is applicable anywhere, and you shouldn’t update your priors more because of secondary sources if you’ve already updated for the primary source).

            The alt right conspiracy nut is relying on these worries. (I’ll concede that here they might distrust science more than normal antivaxxers, but distrust in the same group on the same issue because you believe that the group doesn’t have your best interests at heart is pretty much the same thing.)

            The last instance would fall outside my generalization, and you might want to update your priors a bit if you run into someone like this.

        • DPiepgrass says:

          “Do you update [for] anti-vaxxerism [when] somebody tells you how someone they know got autism from getting vaccinated?”

          Consider this: do you update for anthropogenic global warming when somebody tells you that their region got warmer after millions of their countrymen burned gasoline in their cars? As a member of a climate science anti-misinformation group, this wouldn’t shift my prior toward “CO2 causes global warming” at all, since I recognize it as the fallacy of “post hoc ergo propter hoc”. If someone says “vaccinations caused autism in children I know”, I don’t question whether the children have autism (as this type of lie would be unusual), but I do ask “how do you know it was the vaccines that caused the autism?” and their answer is probably “they had vaccines, THEN they got autism.” Post hoc ergo propter hoc. (I could also ask how they know that the child did not have autism before the vaccines, or why they don’t blame bottle-feeding for the autism, or whether it was specifically the MMR vaccine that caused the autism, which is the vaccine in the Wakefield study that kicked off anti-vaxxerism – the person is unlikely to have good answers to these questions either, but I digress.)

          If, on the other hand, someone says they read a study that said the melting of ice in Greenland could cause the AMOC to shut down by about 2300, causing a redistribution of heat away from Europe, I would shift my prior in that direction. This piece of evidence is “hearsay” like the autism example, but the purported source now is a scientific study which has a lot more credibility to me.

          Science deniers (whether anti-vax, anti-AGW or anti-evolution) tend to have a selective interpretation of particular fields of science. Some of them distrust scientists in general, but more often they can trust physicists, chemists, biologists, and ecologists (speaking about their area of expertise), while singling out vaccine experts, global warming experts or evolutionary biologists (depending on which sciences they oppose) as untrustworthy, so that evidence from those experts is specially discounted. But this is not a simple phenomenon; they have developed a web of interlocking beliefs (some false, some true, some half-true) that work together to reject information that would challenge their belief. I’d recommend Denial101x on edX for those interested in learning about the web of myths involved in climate denial, but the course doesn’t explain how such webs develop in the first place, which is the really interesting question to me. Dark Side Epistemology, I guess. (Note: I’m a volunteer moderator on Denial101x.)

          • DPiepgrass says:

            I should add that the “post hoc ergo propter hoc” fallacy doesn’t reduce the evidential value to zero by itself. The reason I don’t shift my prior also includes background knowledge like “scientists have studied this ad nauseam in large populations and found no link to autism” and “autism is diagnosed later than it exists” and “any number of other factors could be the cause” and “this person probably has no more information about causation than I do”.

            Notice, however, that while the bare claim “someone I know got autism from getting vaccinated” has no sway with me, it is not that I cannot be persuaded! If evidence could not persuade me, I would be no better than the anti-vaxxer. I don’t ask “how do you know it was the vaccines that caused the autism?” as a way of showing my intellectual superiority; I ask it in case they have better reasoning than just “B came after A, so A caused B”. See also How to Convince Me That 2 + 2 = 3.

  7. NoRandomWalk says:

    I wonder if motivated reasoning is actually a tradeoff versus computation costs of dealing with unobserved evidence. If I see a bunch of good sounding arguments for X that I later determine are wrong, I will go about updating as strongly from excellent arguments for X as from weak arguments for not X.

    I think this is a generally reasonable way to think. Sure, we can cherry-pick examples like vaccines (‘surely all right-thinking people know they’re good with no downsides’), but then we hear another better-than-usual argument for god, decide that is wrong, and then believe in god less, because if an unusually good argument in favor does not persuade, all unobserved arguments which are unusually good will probably also not persuade.

    • ThaomasH says:

      Is it rational to disbelieve a proposition if one sees a lot of bad reasons given for the proposition? It seems so to me [I know logically it is not], and I can think of an issue on which I concluded ~x after seeing too many bad arguments for x. (Or was I already biased in favor of ~x and just used bad arguments for x for confirmation?)

  8. benf says:

    “Bias” always needs to be understood with its original meaning, which is “weighting” and not “mistake”. Biases are not always mistakes…in fact, the problem is, most of the time a bias is NOT a mistake…but SOMETIMES it is. So yes, “confirmation bias” just means “giving weight to priors”, which is a good idea in most cases. In science, though, we don’t want our priors to influence our experimental results at ALL, because that’s another variable we would need to account for.

  9. Siddharth says:

    Along similar lines, see this paper “Hindsight bias is not a bias” by philosopher Brian Hedden. Abstract below.

    Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don’t. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence supports. Even if you in fact evaluate the ex ante evidence correctly, you should not be certain of this. Then, learning the outcome provides evidence that if you erred, you are more likely to have erred low rather than high in estimating the degree to which the ex ante evidence supported the hypothesis that that outcome would occur.

  10. meh says:

    alongside confirmation bias is the backfire effect, where the new evidence is weighted in the wrong direction. is something wrong happening here or is it again a misfire of normal reasoning?

  11. Freddie deBoer says:

    This post would serve pretty well, I think, as a much broader critique of the Rationalism movement writ large; there is no objective position to sit in from which to ascertain the (mis)functioning of one’s own cognition. (Or rather, there is a theoretical objective position that a priori can’t be accessed through the perceptive/discriminating apparatus that are the inevitable consequence of being a brain. The eye can’t look at itself or however the cliche goes.) And each new layer of metacognition that you stack to address the problems with the previous layer’s perspective itself implies the necessity of another layer to observe it. Ad infinitum.

    • Conrad Honcho says:

      there is no objective position to sit in from which to ascertain the (mis)functioning of one’s own cognition.

      Does the Rationalist project claim any such thing? I don’t think Rationalists claim they are searching for the “objective position,” rather they are attempting to employ techniques to become increasingly “less wrong.” Each technique is another of those layers you mention.

    • Bugmaster says:

      At the very least, it’s a strong argument against engaging in the kind of fetishization of the Bayes Rule that was popular on Less Wrong back in the day. Yes, it’s a powerful formula, but it won’t automatically make you come up with the correct answers. If your prior for “ghosts exist” is ~= 1, then you’re still very likely to shell out money to psychic mediums, Bayes rule or not.

      • Brassfjord says:

        It’s a rationalistic fallacy to treat beliefs as mathematics. Bayes formula doesn’t help at all in normal thinking.

    • HowardHolmes says:

      +1

      It seems obvious to an outside observer (which as you say does not exist) that rationalists are no less wrong than anyone else.

  12. Markus Karner says:

    Scott,

    as Kaj above said – Sperber and Mercier and the argumentative theory of reasoning. I’d love your take on that opus. Several articles in the last decade, and an entire book. Also kinda ties in nicely with predictive processing (only see what you want to see / can only see what you want to see).

  13. sclmlw says:

    Combine confirmation bias with a strong human bias against unexplained phenomena. Maybe this is because under normal circumstances even a bad model is better than no model at all. Presumably it will quickly run into problems and Bayesian refinements will correct it over time. You can’t refine/improve your model if you don’t have one in the first place, so the brain wants a story it can latch onto.

    A bunch of experts say “We’ve looked at the data and we can’t point to a single common cause of autism.” This is anathema to the drive to explain the outside world, so a bunch of people latch onto any available model, and the closest one is “vaccines cause it”. The problem is that the evidence to correct that model isn’t overwhelmingly apparent to an outside observer, so Bayesian corrections are never sufficient to improve the model.

  14. uau says:

    maybe people naturally overweight priors because that was adaptive in the ancestral environment.

    This seems reasonable from a non-social viewpoint too, especially when considering beliefs about risks (such as the incorrect beliefs about vaccines, and believing “this food is poisonous” about some perfectly safe to eat food as a possibly similar evolutionarily relevant analogue). Your current beliefs have kept you alive so far. Optimal reasoning could be something like “my belief about X being dangerous is likely false, but I should avoid X anyway, because the gains from being right don’t matter that much while even a small chance of great harm from X matters more”. But this requires more complex thinking which separately considers the likelihood of something being true, and separately how seriously you should take any chance of it being true. If you lack capacity for that, ignoring evidence about something being safe is better than “OK it probably isn’t poisonous, I’ll eat lots of it” even if “probably not poisonous” is itself the correct deduction.
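
    A one-line version of that asymmetry, with invented payoffs: even when “probably not poisonous” is the correct credence, the expected value of eating can still be negative, so the cautious policy and the correct belief are compatible.

    ```python
    # Invented numbers: 95% sure the food is safe, modest gain if it is,
    # catastrophic loss if it is not.
    p_poisonous = 0.05
    gain_if_safe = 1
    loss_if_poisonous = -100

    ev_eat = (1 - p_poisonous) * gain_if_safe + p_poisonous * loss_if_poisonous
    print(ev_eat)   # -4.05: "probably safe" and "don't eat it" are compatible
    ```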

  15. probably incorrect says:

    If the no-polar-bears-on-campus prior is very strong, but (on a meta level) the my-friend-is-trustworthy prior is very weak, perhaps it’s not so much a failure to update as it is a canceling out? Seems especially relevant these days when we all rely on a lot of secondary sources.

  16. Hackworth says:

    What am I doing differently from an anti-vaxxer who rejects any information that challenges her preexisting beliefs (eg that vaccines cause autism)?

    For you and your polar bear-spotting friend to be equivalent to an anti-vaxxer and an average doctor, you would have to still not believe your friend after they also provided witness accounts, several hours’ worth of photographic evidence in the full light spectrum, GPS data from the collar worn by the bear, and independent written testimony from a dozen biologists that the animal in front of your friend’s house was indeed a polar bear. At some admittedly undefinable point, “insufficiently weighing new evidence” crosses the line into “confirmation bias” territory, or whatever you want to call it.

    Also note that considering anti-vaxxers, flat-earthers, young-earth-creationists, and what have you as having involuntary failure modes of reasoning is sometimes (occasionally? often?) missing the point. Sometimes, the denial of what a vast majority of the world regards as reality is a semi-conscious, passive-aggressive rebellion against “the establishment”, against the elites in society who have supposedly monopolized the interpretation of reality and who are using jargon far outside the average person’s understanding. These people then try to claim a corner of knowledge for themselves and like-minded individuals, the latter of which is of course vastly simplified by the Internet and especially social media with its inherently gamified structure.

    Maybe the issue runs even deeper. Maybe these fringe beliefs are one of the uglier outgrowths of extreme individualism, or rather of the tension between individualism and collectivism. Every day of one’s life, starting no later than school, the individualistic systems reward or punish the individual for their qualities or lack thereof. Everyone is expected to be smarter, stronger, faster than anyone else, to excel, but by definition, not everyone can excel over everyone else. Even the well-meaning “everyone is valuable/useful in their own way” implies there is a measurable difference between everyone, and there are only so many dimensions to measure. On the other hand, for any group (from sports club to nation state) to function, its members need to agree on a certain set of beliefs and practices, or else you can hardly call it a group.

    I find this to be a fundamental, irresolvable conflict in any society, where shifting the individualism-collectivism slider from one end towards the other does nothing to resolve that conflict. What others call a cognitive bias is sometimes the result of an attempt to unify these contradictory requirements.

    • sclmlw says:

      I think there’s an evolutionary drive against total conformity. Total conformity risks the survival of the species, and so should be selected against. Let’s say something is pro-adaptive and increases your genetic representation in the population significantly, but once every 100 years engaging in that strategy wipes you out. Sort of like living in the flood plain for a 100 year flood. Except instead of hard-coding “don’t live in this specific flood zone” into human genetics, the hard-coded survival strategy is “ensure 0.1% of the population always resists conformity to every possible scenario”.

      At an individual level maybe you waste your life being a ‘prepper’ in South Dakota, and that sucks for you. Maybe you leave your kids susceptible to measles for no good reason because you don’t vaccinate. The genetic program isn’t concerned with whether any individual strategy is actually pro-adaptive in some weird unpredictable apocalypse scenario. Most won’t be – including anti-vax and flat-Earth. At a population level it doesn’t matter that generations of people wasted their lives following thousands of wrong-headed and even harmful ideas, so long as every 5,000 years or so something nobody foresaw – but 0.1% of the population irrationally rebelled against – misses destroying 100% of the species because of irrational non-conformity.

      Against that genetic program, there is no amount of rational discourse that will convince the last few percent of the population. But should it?

      • Snickering Citadel says:

        Evolution does not work that way. It does not plan far ahead. If conformity was that beneficial, everyone would evolve to conform.

        But I agree conformity can be both beneficial and harmful. Say the tribal leader said that no other men are allowed to have sex, only him. Then it would be better to revolt against him. So most humans have instincts to conform and rebel, depending on their situation. But people have different ratios of conformity.

        • Doesntliketocomment says:

          I disagree. This is exactly how evolution works, planning does not factor into it. Even though it frequently penalizes individuals, divergence through mutations and sexual reproduction has shown itself to be a stable strategy for populations, so it continues. Likewise certain amounts of non-conformity are stable over the long run: When the non-conformists win, they win big, which ensures that non-conformist genes are passed on.

      • mtl1882 says:

        I think there’s an evolutionary drive against total conformity. Total conformity risks the survival of the species, and so should be selected against . . . Except instead of hard-coding “don’t live in this specific flood zone” into human genetics, the hard-coded survival strategy is “ensure 0.1% of the population always resists conformity to every possible scenario”.

        Pretty much everyone has some degree of this, and it will just work out that some people are on the extreme ends of the spectrum. Which I think is what you’re saying, but you could also be saying that this is a quality found in 1% of the population only, which was what the below reply seems to be rebutting. While there are obvious evolutionary advantages to gaining knowledge from others and cooperating/trusting, total conformity is a failure mode in every way.

        Something with as much of a following as the anti-vax movement is easy enough to understand even without a particularly contrarian nature. Hackworth explained it very well:

        “Also note that considering anti-vaxxers, flat-earthers, young-earth-creationists, and what have you as having involuntary failure modes of reasoning is sometimes (occasionally? often?) missing the point. Sometimes, the denial of what a vast majority of the world regards as reality is a semi-conscious, passive-aggressive rebellion against “the establishment”, against the elites in society who have supposedly monopolized the interpretation of reality and who are using jargon far outside the average person’s understanding. These people then try to claim a corner of knowledge for themselves and like-minded individuals…What others call a cognitive bias is sometimes the result of an attempt to unify these contradictory requirements.”

        When abstract and confusing information from experts far away seems pushed on you, it is natural to resist. This may not be mainly a factual denial thing, but a protest against being forced to parrot things that don’t make sense to you and seem overly strident. This seems to me like a healthy reaction that would have worked better for mankind for most of history — trusting mass-diffused distant expert knowledge in all areas of one’s life only became something worth considering quite recently. Most have come around, but, again, I suspect most people have a few things they maintain unconventional beliefs on or at least have doubts about, if you were to dig deep enough. It’s just for some people, these issues happen to become hot button issues, and if a community forms around the issue, there’s a template for more with similar instincts to project them onto. A lot of people feel that *something* is off with something they don’t understand, but don’t know what it is. Most people find some available rationalization or push it aside. A few attach the feelings to a public “conspiracy theory.”

      • Total conformity risks the survival of the species, and so should be selected against.

        That’s the wrong way of putting it. Evolution isn’t selecting for the welfare of the species.

        Say rather that, if almost everyone believes X, which is probably but not certainly true, then there is a reproductive payoff to believing not-X. With (say) .99 probability, believing not-X reduces your reproductive success by a little; with .01 probability it increases it by a lot, because after everyone else is killed through believing X, you and those like you survive and repopulate your niche.
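
        To put rough numbers on that: what matters in the long run is the log-average (geometric-mean) growth rate, not the typical year. A minimal Python sketch, with figures I made up:

        import math

        # Invented numbers: "conformists" do 5% better in a normal year but are
        # nearly wiped out in a rare catastrophe year; "contrarians" just hold even.
        p_catastrophe = 0.01   # roughly one bad year per century
        conformist = {"normal": 1.05, "catastrophe": 0.001}
        contrarian = {"normal": 1.00, "catastrophe": 1.00}

        def long_run_growth(fitness):
            # Geometric-mean growth per year = exp of the expected log-growth.
            log_avg = ((1 - p_catastrophe) * math.log(fitness["normal"])
                       + p_catastrophe * math.log(fitness["catastrophe"]))
            return math.exp(log_avg)

        print(round(long_run_growth(conformist), 3))  # ~0.979: shrinks over time
        print(round(long_run_growth(contrarian), 3))  # 1.0: holds steady

        The strategy that wins 99 years out of 100 still shrinks over time, because the rare crash dominates the log-average.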

  17. Snickering Citadel says:

    Say Scott were a member of a religious cult whose core belief is that there are lots of polar bears wandering around Berkeley. And the cult considers everyone who says otherwise to be evil.

    So Scott might reason “The evidence doesn’t support that there are polar bears here, but if I say that, my friends will think I am evil. I better say I believe in polar bears.”

    But maybe he only realizes subconsciously/instinctively that he should say there are polar bears. And this affects his conscious reasoning, so consciously he really does think there are polar bears.

    People could have evolved to be irrational this way, because people who did not follow the religion of their tribe were more likely to be killed/banned/disliked by the other members of the tribe.

    Political beliefs tend to be more rational than religious beliefs, but people still often get angry at those who hold different political beliefs and consider them evil.

  18. robirahman says:

    After making an observation, whether you come to believe something you were previously expecting or something you weren’t, what you end up with is just a posterior belief. It may or may not be based on accurate evidence and sound reasoning.

    Confirmation bias is specifically when you haven’t updated enough from your prior in the direction of the evidence, because the evidence contradicts the priors.
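
    To make “enough” concrete, here is a minimal worked update with invented numbers:

    # Invented numbers: a strong prior meets moderately disconfirming evidence.
    prior = 0.95                # P(hypothesis) before the evidence
    p_e_if_true = 0.2           # P(evidence | hypothesis true)
    p_e_if_false = 0.8          # P(evidence | hypothesis false)

    posterior = (prior * p_e_if_true) / (
        prior * p_e_if_true + (1 - prior) * p_e_if_false)
    print(round(posterior, 3))  # ~0.826 -- where the update "should" land

    # On this account, confirmation bias is ending up well above 0.826
    # (say, still at 0.94) because the evidence cut against the prior.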

    • North49 says:

      You’re saying people recognize the quality of evidence against their prior and choose not to update? I don’t think that tracks. It seems as though people don’t update because, comparing the evidence against their prior, they judge the evidence to be weak/low quality.

  19. Stephen says:

    Related: rational actors who discount the evidence of people they disagree with can arrive at stable but opposite beliefs. This is a philosophy of science paper trying to get at how a group of people, each individually trying to arrive at the truth, might bifurcate into two sub-groups that each tend towards opposite extremes on an issue.

    They assume that: The environment is noisy, everyone shares all of their data, and people distrust people who have different beliefs from them. Aside from the distrust of people with divergent beliefs, the actors are rational (distrusting others is only irrational in this model because everyone in the model always tells the truth, the whole truth and nothing but the truth).

    I find the model fairly compelling, but they have yet to test it with actual data from real people, so it remains merely an interesting theory for how this sort of thing could happen.
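
    For anyone curious how the bifurcation can happen mechanically, here is a toy simulation in the spirit of that description (my own simplification with invented parameters, not the paper’s actual model):

    import random

    random.seed(0)
    TRUE_RATE = 0.6               # action B really does succeed 60% of the time
    N_AGENTS, N_ROUNDS, TRIALS = 20, 200, 10

    credences = [random.random() for _ in range(N_AGENTS)]  # P(B is better)

    def bayes(credence, successes, trials):
        # Update for "B succeeds at rate 0.6" vs. "B succeeds at rate 0.4".
        p_h = credence * (0.6 ** successes) * (0.4 ** (trials - successes))
        p_n = (1 - credence) * (0.4 ** successes) * (0.6 ** (trials - successes))
        return p_h / (p_h + p_n)

    for _ in range(N_ROUNDS):
        # Agents who currently favor B try it and honestly share their results.
        reports = []
        for c in credences:
            if c > 0.5:
                successes = sum(random.random() < TRUE_RATE for _ in range(TRIALS))
                reports.append((c, successes))
        updated = []
        for c in credences:
            for reporter_c, successes in reports:
                # Trust in a report falls off with disagreement, hitting zero
                # once credences differ by 0.5 or more.
                trust = max(0.0, 1.0 - 2.0 * abs(c - reporter_c))
                c = c + trust * (bayes(c, successes, TRIALS) - c)
            updated.append(c)
        credences = updated

    print(sorted(round(c, 2) for c in credences))
    # Typically: most agents end near 1.0, a few stay frozen at low credences.

    Everyone reports honestly and updates correctly on the data they accept; the only twist is that reports from people who disagree too much get little or no weight, which is enough to freeze some agents at their starting position.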

  20. Randy M says:

    Assimilation into one’s tribe required assimilation into the group’s ideological belief system.

    Is this true? How do we know? It seems like an assumption made just to make moderns feel superior for not having hive minds.
    I’m sure assimilation into tribes required not calling bunk on the chief’s sister-in-law when she says the spirits favor his continued rule, but I bet plenty of people doubted it secretly. Is this all that’s meant?

    • NoRandomWalk says:

      The Soviet experience strongly suggests that, for your survival, it is much more efficient to actually believe the party line, so that you don’t spend a lot of time trying to reconcile what you’re supposed to say with your reality, given how little power you have to change things.

    • bullseye says:

      Everybody does this. That’s what ideological “bubbles” are. You believe what your community believes.

  21. kalimac says:

    The condition under which this reasoning fails is when the assumed established priors are not, in fact, established.

    The flaw that makes “confirmation bias” different from normal reasoning lies in the tendency to take false priors as established.

    • Bugmaster says:

      What do you mean by “false priors”, though? That is, how can I personally tell which of my priors are false?

      You could say, “study the literature on the topic”, but what if my priors include the belief that the literature is unreliable?

      • kalimac says:

        The condition I was discussing is if the priors are, in fact, not established.

        Inability to distinguish false from established priors is a syndrome with a definite etiology.

  22. Hyman Rosen says:

    It is often the case that claims of confirmation bias are made by people who are advocating for the other point of view, and so are not themselves believable. If the argument is “you should accept this evidence that I am right,” and the meta-argument is “if you don’t, that is evidence that you have confirmation bias, so you should abandon your confirmation bias and accept that the evidence says I am right,” then the meta-argument is no more convincing than the argument.

    Furthermore, evidence that one side finds convincing can be markedly less so to the other side, and there is enough weight of history to support that wariness. We’ve had eugenics movements, Lysenkoism, racial science, recovered memory, and other such things where the imprimatur of science was claimed to prove that they were true and that people who dissented should be destroyed or shunted aside. It’s valid reasoning to worry that what appears to be overwhelming consensus is actually manufactured.

    For example, transgender activists argue that gender-affirming treatment, medical and surgical, should be made available to children, and claim that science has demonstrated, through many studies, both that these treatments are of benefit psychologically and do not cause harm physically. People disinclined to support such treatments can point to the replication crisis, the size and recency of the studies, and the attempt to ostracize and silence TERFs as reasons to discount that evidence. So which side is guilty of believing what they want to believe? Both, no doubt, and both will claim the other side is subject to confirmation bias in evaluating the evidence.

    Science can neither give us the answers we want nor the answers we don’t want in a timely fashion. It takes time and it takes experiments that cannot be ethically performed, and in the absence of such certainty, belief fills in the gaps.

    • Pink_Creosote says:

      I’d never heard of Lysenkoism before so I did a quick Google search on it:

      Joseph Stalin supported the campaign. More than 3,000 mainstream biologists were fired or even sent to prison,[3] and numerous scientists were executed as part of a campaign instigated by Lysenko to suppress his scientific opponents.[4][5][6][7] The president of the Agriculture Academy, Nikolai Vavilov, was sent to prison and died there, while Soviet genetics research was effectively destroyed until the death of Stalin in 1953.[2] Research and teaching in the fields of neurophysiology, cell biology, and many other biological disciplines was also negatively affected or banned.[8]

      -from Wikipedia

      This doesn’t sound like a good example to give to show that we should be more skeptical of scientific consensus. It sounds like Lysenkoism was artificially supported by the ruling power basically killing or imprisoning anyone who questioned it. This sounds basically nothing at all like the current scientific consensus that e.g. vaccines have no links to autism. If you want to run studies on whether vaccines cause autism, you are more than free to do so. And until studies in favor of that hypothesis exist, anyone who denies the scientific consensus on this absolutely is falling victim to confirmation bias.

      • peak.singularity says:

        Why not? The pharmaceutical industry is not generally known to be trustworthy. AFAIK it’s very hard to run independent studies in some medical domains, because most qualified people would also have a conflict of interest. And then the distrust spreads to other domains…

  23. hopaulius says:

    I’ve lived in Berkeley, and my family has had a negative vaccine experience. Not sure these things are sufficiently similar to support an analogy.
    Polar bear in Berkeley: I have never seen a polar bear except in a zoo. I have seen multiple coyotes, mostly in rural areas. I wouldn’t expect to see a wild coyote in Berkeley, but there is a large redwood park in the hills above the city, and it’s possible one could come in. I suppose a polar bear could escape from a circus or a zoo and wander the streets of Berkeley. But I rate that as much less likely. Both of these estimates are based on my lay knowledge of polar bears and coyotes, and to a lesser extent, my personal experience.
    I just received a shingles vaccine yesterday. I keep all my inoculations current, and I encourage people I know to vaccinate their children. I do, however have reason to be wary of vaccines. My daughter had a high fever and a seizure a few days after having the MMR vaccine. She was just a few months old. I looked at the info. sheet from the vaccine, and indeed it warned that high fever and seizure were possible side effects. We discussed this with her pediatrician, and he said, in essence, “Impossible.” We didn’t administer the MMR boosters, but we did have her checked for titers, and she is immunized (now 24 years old). No second dose necessary.
    So I have had personal experience of a negative side effect following a vaccine, but I cannot swear with certainty that the vaccine caused the side effect. There are a couple other cultural forces at play with anti-vaxxers. They might, for example, have heard every progressive on the planet screaming about evil pharmaceutical companies trying to poison people for profit. They might understandably be confused by the apparently contradictory message from these same screamers, as well as some conservative screamers, that vaccines are perfect and that the government should force all children to take them. They might also have personal experience with a negative health episode following a vaccine. They might actually read about the possible side effects of vaccines on the information handed to them by their pharmacist. They might, like me, be older and have survived mumps, rubella, measles, and chicken pox. They might have heard stories about people like my parents’ generation, who intentionally took their children into a household with one of these active diseases, in order to get them naturally immunized.
    In conclusion, I would argue that neither a report that a polar bear is strolling the streets of Berkeley nor a report that there is a small chance of having a negative outcome from a vaccine is a sign of credulity, stupidity, ignorance, or confirmation bias. I would add that it is almost never the case that decrying someone’s out-of-the-mainstream beliefs as stupid, or as evidence of confirmation bias, will induce them to listen to contrary information. One must begin by admitting the absurdity of the absolutes that one holds oneself (everyone must be vaccinated by products produced by evil corporations who are trying to poison you), so that one can overcome one’s own confirmation bias, enabling one to listen to the person one is trying to persuade.
    Final anecdote: Yesterday I filled out paperwork for a school district job. One of the forms requested immunization information. I recently moved, and any relevant paperwork is packed in boxes. I spent several hours chasing down my immunization info. When I returned it to the HR person, she asked me my birth year. It was before the earliest date needed for vaccine info. She said, “You don’t need this” and discarded the form. Chew on that one.

  24. Protagoras says:

    It’s not a sharp line, but I think the pattern for cases where we identify irrational biases instead of simple heuristics is that they involve overconfidence in the heuristic. This is most clear in cases which involve believing in the output of the heuristic when it is explicitly contradicted by information available to us which is objectively more informative.

  25. Act_II says:

    This is a good post. I only have two problems with it.

    (1) Ignoring evidence (or even weighting it in the wrong direction!) is different from giving it little weight. No number of journal papers is going to convince an antivaxxer, whereas a principled reasoner can theoretically be convinced by some finite amount of evidence.

    (2) Confirmation bias isn’t just affected by what we believe, but by what we want to believe. If I’m emotionally invested in A being true, I might seek out evidence that supports A and never even be exposed to evidence of ~A. This is a form of confirmation bias, but has nothing to do with the way I actually process evidence and everything to do with the way I look for it.

    So I think this is a useful framing in some ways, but doesn’t capture the whole picture.

    • bullseye says:

      No number of journal papers will convince an antivaxxer, because they believe that the authors of such papers are full of crap. Likewise no number of antivaxxer websites will convince me, because I believe that antivaxxers are full of crap.

      If we could somehow isolate antivaxxers from one another, preventing them from reinforcing each other’s belief, their belief would fade.

      • Act_II says:

        No number of journal papers will convince an antivaxxer, because they believe that the authors of such papers are full of crap. Likewise no number of antivaxxer websites will convince me, because I believe that antivaxxers are full of crap.

        Right, you’re assigning 0 weight to the evidence and not updating your priors at all. I’d say both of these are poor reasoning strategies. If you were a perfect Bayesian reasoning machine, you’d weigh antivaxxer arguments based on their content rather than on their conclusion. Obviously your time is better spent doing something other than poring through antivax sites, because you’d very likely end up with a similar prior anyway. But this isn’t a good way to reason about antivax nonsense; it’s a shortcut you use because you don’t want to reason about antivax nonsense.

        If we could somehow isolate antivaxxers from one another, preventing them from reinforcing each other’s belief, their belief would fade.

        Agreed — but it still wouldn’t be journal articles convincing them. It would be other forms of evidence that they actually might give weight to, like peer pressure.

  26. robsica says:

    There’s no such thing as a “confirmation bias” — there is, however, a “myside bias”, as explained by Hugo Mercier in this essay, in his spectacular book with Dan Sperber THE ENIGMA OF REASON, and in his no less excellent new book NOT BORN YESTERDAY.

  27. Kevin says:

    Tversky was one of my professors in grad school.

    “I still think it’s helpful to approach confirmation bias by thinking of it as a normal form of reasoning, and then asking under what conditions it fails.”

    This was how he framed pretty much _every_ bias. In fact, he referred to them as “heuristics and biases”. And his explanatory framework was explicitly that our brains had evolved to maximize speed and minimize effort by applying heuristics that worked well in the ancestral environment. They only become “biases” when applied to other environments.

    Now, I can’t recall him explicitly talking about individual variation in aggressiveness of heuristic application, but as a psychologist, I’m sure he would have endorsed such variation as highly likely. So some people would be more or less vulnerable to bias when operating outside the ancestral environment (and conversely more effective at operating within it.)

    • Roger Sweeny says:

      You make Tversky sound like Gerd Gigerenzer. Which is surprising because Gigerenzer and Kahneman disagree a lot and don’t seem to like each other (whereas Kahneman and Tversky were decades-long collaborators who seem almost like intellectual twins).

      • Kevin says:

        My memory is that Tversky was on board with the concept of “fast and frugal”, using that term often.

        My recollection of the dispute with Gigerenzer was of a variation of the Bayesian vs frequentist debate. Gigerenzer had some framing of classic K&T probabilistic reasoning failure examples in terms of frequencies that he claimed showed there was no reasoning failure.

        I was concentrating in Bayesian decision analysis at the engineering school, so those arguments made little sense to me and I agreed with K&T.

        But on the question of heuristics as useful shortcuts in certain circumstances, my impression was that Tversky endorsed it.

  28. J Mann says:

    I’d argue confirmation bias is a function of normal, accurate Bayesian reasoning. (Or at least the naive kind I think most people actually employ in practice.)

    Let’s say Bob believes (accurately) that there are coyotes but almost no polar bears in Berkeley, and (inaccurately) that there are almost no dogs in Berkeley.

    When three friends of his tell him they saw, respectively, a coyote, a polar bear, and a dog walk down the street, Bob says he believes the first friend and not the second two. But if we test Bob by seeing what odds he’ll take on a $10,000 bet, we learn that he actually believes there’s a high probability the first report is true, and a low probability for the last two.

    Then three other friends tell him they also saw a coyote, a polar bear, and a dog. We test Bob’s internal probability of the three statements being true with another set of bets, and we learn that his internal probability of all three has gone up, but the latter two events started much lower and have gone up more.

    Another three friends report the same sightings, and Bob continues to increase his internal probabilities. Maybe he’s mistaken about the underlying frequency of those animals, or maybe there was a zoo escape – who knows?

    Bob’s not doing the math, and maybe he’d be more accurate if he did, but it looks to me like he’s updating his probabilities based on his priors and new evidence.
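
    Bob’s pattern can be mimicked with a crude repeated Bayes update (the likelihood numbers here are invented for illustration):

    def believe(prior, p_report_if_there=0.7, p_report_if_not=0.05):
        # P(animal was really there | a friend reports seeing it), assuming the
        # report is much more likely if the animal was actually there.
        num = prior * p_report_if_there
        return num / (num + (1 - prior) * p_report_if_not)

    priors = {"coyote": 0.30, "dog": 0.02, "polar bear": 0.0001}
    for animal, p in priors.items():
        trajectory = [round(p, 4)]
        for _ in range(3):            # three friends report it in succession
            p = believe(p)
            trajectory.append(round(p, 4))
        print(animal, trajectory)
    # The coyote shoots up immediately; the dog and polar bear start far lower
    # but each new report moves them proportionally more, as described above.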

  29. blacktrance says:

    Confirmation bias is when you weigh new evidence in favor of your belief more heavily than evidence against it, or look for confirmatory but not contrary evidence. For example, if you perform a biased investigation of whether there are polar bears in Berkeley by going somewhere they’re unlikely to be, like the grocery store.

  30. ksvanhorn says:

    Yes, the math works out: Bayes’ Rule will give an unexpected datum higher probability of being unreliable / wrong than an expected datum, and this is entirely reasonable. But there are still cognitive biases that can lead this process to go wrong. One is the overconfidence bias. If you fail to consider all the possibilities, or fail to assign a reasonable probability to possibilities other than your favored hypothesis, then essentially your prior has tails that are too thin, and you will be too quick to discount unexpected data. The other is the common bias of treating evidence presented by those you consider allies as more reliable than evidence presented by those you consider adversaries.
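
    The first point is easy to make concrete with a small worked example (reliability numbers invented):

    def p_genuine(prior, p_report_if_real=0.95, p_false_report=0.01):
        # P(the sighting is real | the friend reports it).
        num = prior * p_report_if_real
        return num / (num + (1 - prior) * p_false_report)

    print(round(p_genuine(0.30), 3))    # coyote: ~0.976 -- believe her
    print(round(p_genuine(1e-6), 6))    # polar bear: ~0.0001 -- probably a mistake
    # The overconfidence failure is setting that second prior to effectively
    # zero (tails too thin), after which no finite pile of reports registers.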

    • Purplehermann says:

      The final bias makes sense: why would true allies lie to me, while adversaries often have a motive to?

      • ksvanhorn says:

        Think ideological allies / adversaries. Just because someone has a different opinion than you doesn’t mean they’re prone to lying to you.

        • Purplehermann says:

          Someone who is my ideological adversary has good reason to lie to me: if we truly disagree on values or something like that, lying is the only way to convince me.

          (There are people who qualify as allies or neutrals despite different views and/or ideologies etc., and those who qualify as adversaries despite having similar stances, but anyone who actually is an adversary has an incentive to lie to me, and actual allies’ incentives go in the other direction.)

  31. TJ2001 says:

    Ignorance is bliss!

    Literally….

    Ignorance comes from Ignore.
    Ignore means you are presented with evidence and Intentionally hand wave it away… Because you are smarter, more enlightened, more righteous, this is the way we do it and You Are Right!!!

    That’s why it’s BLISS!!

    People bring you perfectly good EVIDENCE and “Tut tut tut (wave hand) I think it will be fine!”

    It’s not that the dog doesn’t know it’s not acceptable to eat its own vomit because we discipline it every time – it’s hard wired to do it. It doesn’t matter that eating vomit makes it sicker every time…. It goes right back to its vomit. Every. Single. Time. Because this time – it will be AWESOME!!!

    • Purplehermann says:

      The sarcasm (cynicism?) on the blog in general has been a bit strong for me recently; I hope it gets toned down.
      (Don’t mean to pick on you in particular, you just happened to comment now)

  32. ajb says:

    I think Jaynes argues exactly this in his textbook on the Bayesian approach to probability, “Probability Theory: The Logic of Science”, in a section called “Converging and Diverging views”, which can be found in this copy of chapter 5.

  33. A good example of this is the faster-than-light neutrino anomaly. They looked very hard for problems with their experimental setup, problems they never would have found had they not gotten a result their priors said was false.

  34. pdan says:

    Relevant episode of TWIMLAI: https://twimlai.com/twiml-talk-330-how-to-know-with-celeste-kidd/

    The interviewee claims her lab has found that people update their priors much too strongly at first, and afterwards don’t update enough. Especially problematic is that a sequence of several confirmations is enough to cement a prior.

  35. Purplehermann says:

    Confirmation bias, in the sense of special motivated reasoning where events that confirm your worldview are taken as disproportionately strong evidence and contradicting events as disproportionately weak, seems to happen mostly on questions it is important to the person to be right about, where they feel their view is attacked regardless of whether it is true. (Or they feel attacked because of the view itself.)
    Essentially, once it’s a fight, you fight. Someone is trying to pull one over on you, you have enemies, and you are looking extra carefully for anything off. You switch from trusting to paranoid. And every time someone brings up their “facts,” they are trying to pull one over on you or are dupes/pawns of those who are.

    I’ve seen this with anti-vaxxers I’ve talked to (usually they think I’m not trying to ‘get’ them, but do think I’m naive. Have seen both versions though), they get very defensive on the subject, and get more defensive the more well put together your reasoning and sources are. (A stronger enemy or dupe is trying to change their mind.)

    This should be separated from the usual “hmm, yeah, I doubt my neighbor saw a polar bear in her back yard.”

  36. alwhite says:

    Confirmation bias is bigger than just believing and rejecting information you hear. It’s also a pattern of what kind of information you search for, interpret, and remember.

    I believe that red Volkswagen Beetles are everywhere; therefore I only remember seeing red Beetles while discounting all the other cars I’ve seen. I believe that Republicans are happier than Democrats, therefore I search for studies saying this and fail to search for studies saying the opposite.

    I think reducing confirmation bias to just believing and rejecting in the moment makes it too broad to be a meaningful fallacy, and thus you can make it fit any idea. I would say that confirmation bias fails at Bayesian reasoning (I have no idea what “normal Bayesian reasoning” is) because Bayesian reasoning explicitly tries to capture all states. Take the breast cancer example (worked through just below): normal reasoning ignores the false positives, while Bayesian reasoning explicitly states true/false positives and negatives to overcome the tendency to ignore them.

    The reasoning you do present does seem normal, but I don’t think it’s confirmation bias.
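
    Here is the breast cancer example worked out, using the figures it is usually presented with (exact numbers vary by source):

    prevalence = 0.01          # 1% of patients screened actually have cancer
    p_pos_if_cancer = 0.80     # sensitivity
    p_pos_if_healthy = 0.096   # false-positive rate

    p_cancer_given_pos = (prevalence * p_pos_if_cancer) / (
        prevalence * p_pos_if_cancer + (1 - prevalence) * p_pos_if_healthy)
    print(round(p_cancer_given_pos, 3))   # ~0.078: most positives are false alarms
    # "Normal" reasoning that ignores the false positives lands near 0.8 instead.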

  37. Godfree Roberts says:

    “Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview.”?

    Having been exposed to a few emotionally mature human beings, I’d modify that. It’s a characteristic of immature and unintelligent human beings to dismiss facts that don’t fit their worldview.

  38. ThaomasH says:

    The puzzle is where the strong priors come from and how they form. There was a time when “no one” believed that immigration was bad, trade wars good, Federal deficits at full employment good, GMO bad, nuclear power bad, vaccines bad, climate change nothing to worry about, low-density zoning good. Now some people hold these beliefs vehemently. Yes, I’m showing MY priors, but at least I think I know what kind of evidence would lead me to change my mind.

  39. Doesntliketocomment says:

    Thinking about Scott’s example, I may have spotted a way forward when talking with the conspiracy crowd. Here’s the thing: if your trusted friend said they saw a polar bear walk by their house, you should conditionally believe them, given certain reassurances. These would be that they do in fact know what a polar bear is, that they actually got a good look at the animal, and that they fully understood that their observation was extremely unusual. This last one is the most overlooked, but perhaps the most important. While maybe a polar bear escaped and wandered around their house, it would be an incredibly improbable experience, and for them not to recognize that suggests something deeply wrong with their processing of the world.

    Frequently when people try to convince conspiracy theorists, they do it from the position that the truth is eminently evident, but by adopting that tone they end up reducing their credibility with the audience – “I mean of course there are polar bears around, it’s winter, they’re everywhere. You see them all the time.” Faced with someone spouting an improbable tale who will not acknowledge it as improbable, the conspiracy theorists just downgrade their trust in the person and reject the argument. Perhaps a better mode of conversation would, perversely, stress how seemingly unlikely the truth actually is, as a way of building credibility as an observer.

  40. Irenist says:

    Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.

    I would’ve at least checked if the Oakland Zoo had recently added polar bears to their mammal exhibits, and if any of them had escaped into the Berkeley Hills, before assuming a friend was mistaken, lying, or hallucinating.

    (I think forestalling carping like mine is why these Russell’s Teapot sorts of thought experiments traditionally involve purple unicorns or whatever.)

    • J Mann says:

      I think those are different priors. I’d argue that reports of purple unicorns would cause you to increase your subjective probability that a purple unicorn was there, just by much less than reports of a polar bear would cause you to update your probability of a polar bear being there.

      With the polar bear, you’d be thinking “maybe there’s a zoo escape, or a wealthy pet owner, or something,” while with a purple unicorn, it would be more like “maybe it’s a prank, or maybe someone has genetically engineered a unicorn and I haven’t heard about it.”

  41. J Mann says:

    On reflection, maybe it makes more sense to look at a contested issue.

    For example, did Judge Kavanaugh assault Christine Blasey Ford?

    I think you’re right that if you started with a 98% probability of one or the other, it would make sense to judge new evidence for your position much differently than against your position.

    And my observation was that most people with an opinion tended to gravitate much more strongly to evidence that supported their opinion.

    But does that mean that people actually thought it was all but certain that he was either guilty or innocent and then operated on quasi-Bayesian principles after that, or that they thought the question was closer but were stuck in cognitive bias after that?

    • Aapje says:

      A lot of people presumably already have strong priors like:
      – Lots of men assault women and accusing a man is costly for a woman, so any accusation is almost certainly true
      – Lots of women confuse strong negative emotions with a lack of consent and strong positive emotions with the existence of consent, regardless of the actual communication.
      – Memories are reliable
      – Memories are unreliable
      – Democrats tend to greatly exaggerate accusations of Republicans because they hate us
      – Republicans tend to greatly exaggerate accusations of Democrats because they hate us

      Then, given that an individual accusation often has poor evidence one way or the other, at least initially, it makes sense to lean heavily on the priors (if we ignore that the priors tend to suck too).

      • J Mann says:

        Thanks – my read of that debate was that if you really pushed people, many would decide that the actual odds were somewhere between 50-50 and 75-25 on one side or the other, but that given those odds, justice required us to come down on one side or the other.

        But people seemed to update as if they thought one side or the other was basically certain.

        Maybe part of the problem was that on an abstract question that doesn’t matter in people’s daily lives, they tend to act as if something is close to certain even when there’s strong evidence that it’s not.

        • Aapje says:

          I think that people are too impressed by claims of certainty and don’t discount people enough for exaggerating and/or stating falsehoods, making it a winning strategy to claim certainty, even if you have to retreat to a more defensible & limited claim when pressed.

          Of course, our host has tried to name this tactic.

  42. rui says:

    In this post, confirmation bias is treated as meaning “whenever you stick with a wrong conclusion despite being presented with evidence against it”. Or something.

    That’s definitely how many people are using the term, and to that extent, I agree. But not all people use it that way. According to Wikipedia, it’s “the tendency to search for, interpret, favor, and recall information in a way that confirms or strengthens one’s prior personal beliefs or hypotheses”. That is, it’s a problem with the method of reasoning: justifying your belief with cherry-picked evidence, or being presented with contrary evidence and ending up strengthening your belief as a result, after searching for your side’s standard answers to it. Those kinds of things.

    All in all, I think this kind of conversation is very necessary.

  43. keaswaran says:

    Andrea Wilson’s paper “Bounded Memory and Biases in Information Processing” (available here: https://sites.google.com/site/andreamaviwilson/research) shows an even more extreme version of this. If one only has a finite state machine to process bits of evidence for and against a hypothesis, rather than a Turing machine with unbounded memory, then the optimal strategy involves only probabilistically responding to new evidence, with the probability being higher if one is already confident of what the evidence is showing and lower if one is confident of the opposite. With large amounts of equivocal evidence, people will tend to become polarized.
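
    A toy version of the idea, just to show the mechanism (my own crude automaton with invented parameters, not Wilson’s optimal one):

    import random

    random.seed(1)
    N_STATES = 7    # 0 = "sure it's false" ... 6 = "sure it's true"

    def step(state, signal_is_positive):
        # Move one notch toward the signal, but only with a probability that
        # shrinks when the signal contradicts the current lean.
        lean = state / (N_STATES - 1)
        if signal_is_positive:
            if random.random() < lean:
                return min(state + 1, N_STATES - 1)
        else:
            if random.random() < 1 - lean:
                return max(state - 1, 0)
        return state

    # Two agents see the same equivocal stream (55% positive signals) but start
    # with opposite leans; with bounded memory they routinely end up polarized.
    signals = [random.random() < 0.55 for _ in range(500)]
    for start in (1, 5):
        state = start
        for s in signals:
            state = step(state, s)
        print("start", start, "-> final state", state)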

  44. Basil Marte says:

    I think confirmation bias can be described as:
    1: Polar bears, here? It’s more probable that she is mistaken.
    2: Since she is mistaken, her claimed polar bear sighting is adequately explained. No polar bears here!

    1. is normal Bayesian reasoning.
    2. is stupid; basically, it uses the conclusion derived from a premise to argue for the truth of the premise. It double-counts the information in the prior.

    If I were to speculate, I’d say that the human brain keeps rounding results.
    -> “polar bears in Berkeley” is at TINY_VALUE.
    1: update on evidence. Label updated hypotheses as “recently changed”.
    -> “polar bears in Berkeley” is at a very small value; too small to represent with anything but TINY_VALUE.
    2*: round “polar bears in Berkeley” to TINY_VALUE. Search for justification and find the hypothesis “she made a mistake”, which is labeled “recently changed”. Great find, the label says it relies on hot, new evidence!
    -> “polar bears in Berkeley” is at TINY_VALUE.
    The overall description of what happened is that the evidence did not move credence in the belief one jot.

    This way, an army of arguments can be absorbed without changing one’s belief, if the update from each argument is eaten by rounding error (each rounding being justified by rehearsing the same argument each time).
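
    Taken literally, that rounding story looks something like this (numbers invented):

    TINY_VALUE = 1e-6          # the stored prior for "polar bears in Berkeley"
    GRID = 1e-6                # coarsest resolution the belief gets stored at

    def snap(p):
        # Round the belief to the nearest representable grid point.
        return round(p / GRID) * GRID

    belief = TINY_VALUE
    for _ in range(100):       # a hundred polar-bear reports, each weak evidence
        odds = belief / (1 - belief) * 1.3   # likelihood ratio 1.3 per report
        belief = snap(odds / (1 + odds))
    print(belief)              # still 1e-06: every single update was eaten
    # Without snap(), the same 100 reports would push the belief to ~0.999996.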

  45. notsorational says:

    There seems to be an assumption missing from your model and the evo psych models.

    Humans are not static beings; specifically, we start as somewhat naive children who believe anything we are told, and slowly become more set in our ways and beliefs. This may be evolution’s way of approximating Bayesian priors. In childhood you soak up information and skills, learning quickly (and, historically, dying most of the time); then as an adult you stop learning and mostly devote your time to surviving. This was probably true in prehistory and is definitely true today.

    Also, the evo psych theory of social pressure is very relevant for other biases, but I am not so sure it is relevant for confirmation bias specifically.

  46. sandman says:

    I haven’t read all the comments, so I apologize if this is redundant, but your argument recalls the idea of group selection in evolution. I think the idea is that social bonds may themselves exert selective pressures that favor more cohesive groups. Religion is often proposed as such a social bond. Maybe a tendency to ideological uniformity has had selective advantages, and as a result we, i.e. the selected population, are particularly adept at believing our group’s line. This is to say, we’re designed to be suckers. Barnum was right!

  47. Deej says:

    It’s different because you don’t care whether polar bears exist, so you’re making a rational judgement based on prior knowledge. Confirmation bias, as I think of it and as I presume most people mean it, has an emotional element. I feel good when I see evidence confirming my left-wing priors, I feel bad when I see evidence supporting right-wing positions, and I’m thus more likely to take the left-wing evidence on board.

    Note: I’m not sure who I would believe more, though. If a friend who I believed to be of sound mind and not taking the piss told me he’d seen a polar bear, I would probably believe him, on the basis that it would be difficult to mistake a polar bear for anything else, whereas a dog can fairly easily be mistaken for a coyote.

  48. jaredjacobsen says:

    A misfire of rational inference? How about just rational inference? Sam Gershman makes a convincing argument for this in “How to Never Be Wrong”:

    Human beliefs have remarkable robustness in the face of disconfirmation. This robustness is often explained as the product of heuristics or motivated reasoning. However, robustness can also arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses.