The Cowpox of Doubt

I remember hearing someone I know try to explain rationality to his friends.

He started with “It’s important to have correct beliefs. You might think this is obvious, but think about creationists and homeopaths and people who think the moon landing was a hoax.” And then further on in this vein.

And I thought: “NO NO NO NO NO NO NO!”

I will make a confession. Every time someone talks about the stupidity of creationists, moon-hoaxers, and homeopaths, I cringe.

It’s not that moon-hoaxers, homeopaths, et al. aren’t dumb. They are. It’s not even that these people don’t do real harm. They do.

What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.

It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.”

But to me, the rationality movement is about Self-ing irrationality.

It is about realizing that you, yes you, might be wrong about the things that you’re most certain of, and nothing can save you except maybe extreme epistemic paranoia.

Talking about moon-hoaxers and homeopaths too much, at least the way we do it, is counterproductive to this goal. Throw examples of obviously stupid false beliefs at someone, and they start thinking all false beliefs are obvious. Give too many examples of false beliefs that aren’t tempting to them, and they start believing they’re immune to temptation.

And it raises sloppiness to a virtue.

Take homeopathy. I can’t even count the number of times I’ve heard people say: “Homeopaths don’t realize beliefs require evidence. No study anywhere has ever found homeopathy to be effective!”

But of course dozens of studies have found homeopathy to be effective.

“Well, sure, but they weren’t double-blind! What you don’t realize is that there can be placebo effects from…”

But of course many of these studies have been large double-blinded randomized controlled trials, or even meta-analyses of such.

“Okay, but not published in reputable journals.”

Is The Lancet reputable enough for you?

“But homeopaths don’t even realize that many of their concoctions don’t contain even a single molecule of active substance!”

But of course almost all homeopaths realize this and their proposed mechanism for homeopathic effects not only survives this criticism but relies upon it.

“But all doctors and biologists agree that homeopathy doesn’t work!”

Have you ever spent the five seconds it would take to look up a survey of what percent of doctors and biologists believe homeopathy doesn’t work? Or are you just assuming that’s true because someone on your side told you so and it seems right?

I am of course being mean here. Being open-minded to homeopaths – reading all the research carefully, seeking out their own writings so you don’t accidentally straw-man them, double-checking all of your seemingly “obvious” assumptions – would be a waste of your time.

And someone who demands that you be open-minded about homeopathy would not be your friend. They would probably be a shill for homeopathy and best ignored.

But this is exactly the problem!

The more we concentrate on homeopathy, and moon hoaxes, and creationism – the more people who have never felt any temptation towards these beliefs go through the motions of “debunk”-ing them a hundred times to one another for fun – the more we are driving home the message that these are a representative sample of the kinds of problems we face.

And the more we do that, the more we are training people to make the correct approach to homeopathy – ignoring poor research and straw men on their own side while being very suspicious of anyone who tells them to be careful – their standard approach to any controversy.

And then we get people believing all sorts of shoddy research – because after all, the world is divided between things like homeopathy that Have Never Been Supported By Any Evidence Ever, and things like conventional medicine that Have Studies In Real Journals And Are Pushed By Real Scientists.

Or losing all subtlety and moderation in their political beliefs, never questioning their own side’s claims, because the world is divided between People Like Me Who Know The Right Answer, and Shills For The Other Side Who Tell Me To Be Open-Minded As Part Of A Trap.

This post was partly inspired by Gruntled and Hinged’s You Probably Don’t Want Peer-Reviewed Evidence For God. But there’s another G&H post that got me thinking even more.

Inoculation is when you use a weak pathogen like cowpox to build immunity against a stronger pathogen like smallpox. The inoculation effect in psychology is when a person, upon being presented with several weak arguments against a proposition, becomes immune to stronger arguments against the same proposition.

Tell a religious person that Christianity is false because Jesus is just a blatant ripoff of the warrior-god Mithras and they’ll open up a Near Eastern history book, notice that’s not true at all, and then be that much more skeptical of the next argument against their faith. “Oh, atheists. Those are those people who think stupid things like Jesus = Mithras. I already figured out they’re not worth taking seriously.” Except on a deeper level that precedes and is immune to conscious thought.

So we take the intelligent Internet-reading public, and we throw a bunch of incredibly dumb theories at them – moon-hoaxism, homeopathy, creationism, anti-vaxxing, lizard people, that one guy who thought the rapture would come a couple of years ago, whatever. And they are easily debunked, and the stuff you and all your friends believed was obviously true is, in fact, obviously true, and any time you spent investigating whether you were wrong is time you wasted.

And I worry that we are vaccinating people against reading the research for themselves instead of trusting smarmy bloggers who talk about how stupid the other side is.

That we are vaccinating people against thinking there might be important truths on both sides of an issue.

That we are vaccinating people against understanding how “scientific evidence” is a really complicated concept, and that many things that are in peer-reviewed journals will later turn out to be wrong.

That we are vaccinating people against the idea that many theories they find absurd or repugnant at first will later turn out to be true, because nature doesn’t respect our feelings.

That we are vaccinating people against doubt.

And maybe this is partly good. It’s probably a good idea to trust your doctor and also a good idea to trust your climatologist, and rare is the field where I would feel comfortable challenging expert consensus completely.

But there’s also this problem of hundreds of different religions and political ideologies, and most people are born into ones that are at least somewhat wrong. That makes this capacity for real doubt – doubting something even though all your family and friends are telling you it’s obviously true and you must be an idiot to question it at all – a tremendously important skill. It’s especially important for the couple of rare individuals who will be in a position to cause a paradigm shift in a science by doubting one of its fundamental assumptions.

I don’t think that reading about lizard people or creationism will affect people’s ability to distinguish between, let’s say, cyclic universe theory and multiverse theory, or to weigh other equally dispassionate debates.

But if you ever need to have a true crisis of faith, then any time you spend thinking about homeopathy and moon hoaxes beyond the negligible effect they have on your life will be time spent learning exactly the wrong mental habits.


123 Responses to The Cowpox of Doubt

  1. Kevin Graham says:

    This reminds me of the wage gap debate because most arguments made online about how the wage gap is real are extremely weak.

    Example: “You don’t believe in the wage gap? FEDORA! You’re wrong because FEDORA!”

    And then when skeptics hear about it again they say: “Oh, the wage gap? You mean the thing people are convinced exists because of funny-looking hats? Totally fake.”

    But good arguments that the gap is at least partially caused by discrimination totally exist.

  2. Rm says:

    Meh, ‘self’ is a pretty widely used verb – it means ‘to fertilize one’s own self’, and is an antonym of ‘cross’ (cross-fertilize).

  3. Pingback: The "Flying Car Fallacy" and Why It's Wrong - Stratexist

  4. b person says:

    Nice read. I would say belief is something you choose for yourself, and it doesn’t really matter what that belief is, as long as you know that it’s just your belief and it doesn’t mean anything more than that. Beliefs have nothing to do with science. Science is about technology and progress; belief (for me) is about comfort and stress relief, maybe even balance. The problem for me is that people now hold their beliefs as science, and their science as beliefs.

  5. Pingback: Maybe You’re Wrong (Or: Exercises in Humility) | Andrew Glidden

  6. Alexander Stanislaw says:

    Naming a common enemy may be bad epistemic hygiene but it is good politics/community building, which I suppose is why it frustrates you.

  7. Pingback: Someone Writes An Anti-Racist FAQ | Slate Star Codex

  8. Pingback: Linkblogging for 17/04/14 | Sci-Ence! Justice Leak!

  9. Pingback: The Idle Pleasures of Opposition… | Back Towards the Locus

  10. The Lancet article does not support homeopathy, unless you and I live in separate states of knowledge. The best it says is that homeopathy might have an effect beyond placebo, but it clearly states it has no clinical effect. It’s an amateurish mistake to ignore clinical effects in studies of any “medical” effect (and homeopathy is about as medical as waving a wand).

    Of course, just to go all strawman, the Lancet also in 1998 published the article by the fraud and defrocked physician, Andy Wakefield, though they subsequently retracted it. Many of us in the “inoculation” world (honestly, I’ve read hundreds of articles on vaccines, and inoculation may have been used once) think that the Lancet has some serious explaining to do about its quality of peer review.

    A rational scientific skeptic examines the quantity and quality of evidence before walking to a conclusion. And we don’t cherry pick. The vast majority of evidence shows that inoculations work, and homeopathy is simply water.

    And if someone provided peer-reviewed evidence for the existence of a god, I’d read it. But I’d wait for the systematic review of the evidence. The fact is that every single magical medical procedure, whether homeopathy or acupuncture or crystal healing, depends on supernatural events. So most real skeptics deal with this crap all the time. If someone brought a PNAS article that said “god exists”, it would require the same suspension of rationality and scientific principle as homeopathy. It would be laughable.

    If this is the quality of your scientific thinking? Well, you really need some upgrading.

    • Scott Alexander says:

      Sure, when what you care about is whether a particular medicine is good for treating patients, clinical significance trumps statistical significance.

      But if it were discovered that homeopathy really did work and water had memory and solutions without a single atom of active ingredient could exert specific effects and all that – but it just wasn’t enough to make a dent in most diseases most of the time – do you really think that would be a resounding victory for the skeptical community?

      If it were discovered that homeopathy had a real effect, but it wasn’t clinically significant, that would still be among the most astounding discoveries in human history and force us to reassess everything we think we know about the world.

      Other than that, you’re agreeing with me, then insulting me for it.

      • Ken Arromdee says:

        There are reasons why we should be particularly skeptical of results that are statistically significant but marginal. Such results are highly likely to be affected by small but nonrandom effects that have nothing to do with what is being studied. In some cases the results may be statistical flukes.

        To be credible, such results need to be reproduced many times, and they should not remain marginal as more studies are done. Otherwise they are at best signs of pathological science.

        If it were discovered that homeopathy had a real effect, but it wasn’t clinically significant, that would still be among the most astounding discoveries in human history and force us to reassess everything we think we know about the world.

        We don’t ignore small effects because the discovery would be useless. We ignore them because effects that are statistically significant but small are most likely not caused by the effect being tested at all, and deserve to be ignored.
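
        To make that concrete, here is a minimal simulation (my own illustrative numbers, nothing from any real study): a tiny systematic artifact, with the true effect being exactly zero, is expected to produce a “statistically significant” marginal result.

        ```python
        # Sketch: a small non-random artifact (imperfect blinding, measurement
        # drift, etc.) masquerading as a statistically significant effect.
        # All numbers are illustrative assumptions, not from any real study.
        import math
        import random
        import statistics

        random.seed(0)
        N = 20_000
        BIAS = 0.02  # tiny systematic artifact; the true treatment effect is zero

        treatment = [random.gauss(BIAS, 1.0) for _ in range(N)]
        control = [random.gauss(0.0, 1.0) for _ in range(N)]

        diff = statistics.mean(treatment) - statistics.mean(control)
        se = math.sqrt(statistics.variance(treatment) / N
                       + statistics.variance(control) / N)
        print(f"difference = {diff:.3f}, z = {diff / se:.1f}")
        # The expected z here is about 2 -- borderline "significance" -- even
        # though nothing about the treatment works. Marginal results that stay
        # marginal as studies accumulate deserve exactly this suspicion.
        ```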

  11. Scott Alexander says:

    “One reason you might have a hard time discouraging this is that it forms the social bedrock of the New Atheist and skeptic movements. Yes, of course members do other things, but the tribes would not exist without it. The Overcoming Bias/LW crowd was at one point strongly distinct from it, but it is generally converging, probably under the inevitable onslaught of eternal September and memetic selection. HPMOR does that process no favors either.”

    I sort of worry about this, but I have not seen a lot of creationism/moon-hoax/etc style debate on Less Wrong or peripheral rationalist areas.

  12. ElrondHubbard says:

    So, I’ve been trying to find a place to say this for like two months, and this seems as good a place as any.

    I had been on Less Wrong for around three years before I actually bothered to read any of the sidebar blogs, including this one. And I had made the assumption, for some reason, that everybody on Less Wrong felt the same way about politics as I do – that politics is mindkilling, and that the Less Wrong taboo against politics is a good thing and minimizing the stain of politics in one’s personal life is probably a good habit to keep.

    Then I eventually followed the links to the sidebar blogs, and boy was I surprised. Many of these inspiring Less Wrong beacons of rationalist virtue whose posts I had been reading for years, who I knew very well had been reading the same things I had been reading about the importance of questioning one’s beliefs, and having a small identity, and not espousing conjunctively complex hypotheses, and saying “oops” – updating on evidence, and not writing your bottom line first, and Jesus do I need to just repeat the whole God damn Sequences here …

    Many of these people had Twitter accounts and blogs indistinguishable, epistemically, from pretty much any other political blog on the Internet. Except, I guess, they used words like “epistemically.”

    It was shocking. It is shocking. I am still in a state of shock, and am currently (currently as in, at this moment) torn between an intense alienation from this “rationalist” community and a desire to take an active role at or near its obviously vacant helm.

    I have chosen to dump all this navel-gazing here because, yeah, it looks to me like a whole bunch of folks read the Less Wrong Sequences and somehow, goodness help us, took basically whatever they believed at the time and doubled down on it to generate a superpowered, even more obviously-wrong-to-everyone-but-themselves version of it. Against my greatest hopes, in a great many smart people, it somehow did serve as an inoculation against the very things it was supposed to be promoting.

    That is just depressing as hell.

    • Anonymous says:

      I summon Michael Vassar to deliver the perfect one-liner given that prompt!

      Please?

    • St. Rev says:

      老僧三十年前未參禪時、見山是山、見水是水、及至後夾親見知識、有箇入處、見山不是山、見水不是水、而今得箇體歇處、依然見山秪是山、見水秪是水

      “Before I had studied Zen for thirty years, I saw mountains as mountains, and waters as waters. When I arrived at a more intimate knowledge, I came to the point where I saw that mountains are not mountains, and waters are not waters. But now that I have got its very substance I am at rest. For it’s just that I see mountains once again as mountains, and waters once again as waters.”

      • ElrondHubbard says:

        Yes, okay, I do not actually advocate not having politics in private life, although I wish we could all agree on a Nash equilibrium where we don’t talk about politics in private life. Regardless, a bad argument is a bad argument. A rationalist who uses a bad argument is either (i) failing at their art or (ii) being deceptive, intentionally using a bad argument as a soldier in a war of ideas. I can actually get behind the latter, but I suspect the former is what’s happening.

        • St. Rev says:

          I do not actually advocate not having politics in private life

          I do!

        • ElrondHubbard says:

          I used to think I did until I caught myself having opinions about totally innocent, definitely-not-political things that other people disagreed with me about and those disagreements happened to be things that people talk about in election speeches.

        • no says:

          For those of us who have blogs that run on daily [or even semi-daily] schedules, best-practice rationality does not mix well with daily schedules unless you have both free time and {superhuman focus|a reliable source of speed}.

          And every once in a while (this has only happened to me once, however, and it ran elsewhere than my blog), word comes down from on high that an article is needed that makes the best case possible for X, whether or not X is actually right.

    • blacktrance says:

      “Politics is the mindkiller” is a warning to be cautious; it doesn’t mean we should stay away from politics altogether. Talking about political topics is not mutually exclusive with having a small identity, questioning one’s beliefs, saying “oops”, etc. I personally have said “oops” about political topics, and I’m sure others have as well.

      As I understand it, the LessWrong taboo against politics is not meant to be a universal taboo, but more of a taboo on the site (and even there, not an absolute one). The taboo there is useful because people disagree about politics (often strongly), and using topics from modern politics as examples of rationality or irrationality would be detrimental to teaching people how to be rational if they disagree with the example. Saying something like “Republicans believe X, which is a mistake, and they make this mistake because of Rationality Flaw R” is not the best way to teach what Rationality Flaw R is, because it’s highly likely that someone who believes X will not agree that it’s a mistake, and your current goal is to teach them about R, not to convince them that X is wrong.

    • ozymandias says:

      I think it is generally bad form to say that everyone has dumb and irrational beliefs that are obviously wrong to everyone but them (except, implicitly, you, since you’re the one who can recognize everyone’s dumbness and irrationality and obvious wrongness) without laying forth your beliefs for examination. It is very easy to signal being the Rationalist Master when you aren’t making any object-level arguments people can disagree with.

      Nut up or shut up.

      • blacktrance says:

        Two attempts to steelman this position:

        1. The point is not that LWers who engage in politics have irrational beliefs, but that their beliefs seem indistinguishable from the beliefs of non-LW people of the same position, i.e. progressive LWers sound the same as progressive non-LWers, libertarian LWers sound the same as libertarian non-LWers, and so on. One would expect that the lessons of the Sequences, which LWers appear to apply to non-political topics, would be applied to politics as well, and this would make their views sound different from non-LWers with the same views, but this doesn’t seem to be happening.

        2. LWers know that politics is the mindkiller, and so they have a good reason to stay away from politics altogether, lest they be mindkilled themselves. The fact that so many LWers are engaged in politics is evidence that they haven’t fully absorbed this lesson, and instead of using the lessons of the sequences to make themselves more accurate epistemologically, are just using them to justify their position and beat up on their opponents more effectively. It is not any of their political positions that is irrational, what’s irrational is their decision to participate in politics in the first place.

        • anon says:

          2 is interesting. I kind of like the idea of intentionally abstaining from politics. Rather than attempting to engage with politics cautiously it might be best to back away entirely. I think I’ll try to do that a bit more, if not completely.

    • Scott Alexander says:

      Well, a couple of things.

      I don’t think they’re “indistinguishable from any other political blog on the Internet”. In fact, I can’t think of any LW-linked blog that fits this description. More Right is the only purely political blog that comes to mind, and it seems…kind of totally unique. Like, I have a lot of criticisms of it. But “similar to other things” is not one of them.

      But leaving uniqueness aside to focus on reasonableness, I have been pretty impressed overall with the standard of political discourse on most LW-linked blogs. Maybe I have low standards.

      That leaves the problem that people discuss politics at all. I feel really guilty about this. I specifically space out political posts and put a couple of scientific/philosophical posts in between them so I don’t end up in a spiral of shame and self-hatred. My main justification is that it gets me praise and links and blog views, and blogging is a really operant-conditioning reward-driven enterprise in a way you probably don’t appreciate if you haven’t tried it.

      My excuse is that this blog is not my place where I try to do good things and change the world. My working and making money and donating it to charity is my place where I try to do good things and change the world. This blog is my hobby and guilty pleasure. I don’t think discussing politics is morally worse than discussing baseball, as long as you make an effort not to lower the discourse and as long as you don’t delude yourself into thinking you’re doing something better than discussing baseball and excuse yourself of doing any other good deeds because of it.

      I would support someone with ridiculously high standards trying to seize the helm of the rationalist community.

      I would also be interested to know what blogs and posts you thought were so low-standard. You can email me at scott [at] shireroth [dot] org if you don’t want to shame anyone in public (you can shame me if you want, it won’t be worse than I shame myself.)

      • Eli says:

        I would support someone with ridiculously high standards trying to seize the helm of the rationalist community.

        I was quite under the impression we already have an unofficial leadership. I quite like it this way, following the leaders we emphatically don’t have.

  13. St. Rev says:

    I’ve alluded to this above, and in the past, but let me be explicit here: Science fans are wrong about homeopathy.

    Check this stuff out: http://dailymed.nlm.nih.gov/dailymed/drugInfo.cfm?id=35576 Sinus Buster is, by all reports, fantastically effective against sinus headaches and congestion. It’s also homeopathic…kind of.

    Legally, anyway. Look at the label. The main active ingredient is a 4x/5x suspension of Capsicum annuum–i.e. it’s a dilute preparation of pepper spray.

    Note, first off, that the ritual “lol, not one molecule remains!” argument doesn’t apply at all. Sinus Buster hurts. There’s plenty of capsaicin in there. But it does apply the central homeopathic principle: “like cures like”! Sinus Buster uses a painful, irritating substance to treat sinus irritation and pain.

    Sinus Buster aside: as I joked above, vaccination actually is an important, valid, working implementation of homeopathic ideas. A vaccine uses a highly weakened preparation of an infectious agent to prime the immune system against future attack. “Like cures like”…or anyway prevents it. It’s ironic, then, that the science fans who mock homeopathy are often the same science fans who mock antivaxers.

    Yes, modern homeopathy is 98% useless and 99% wrong. It floundered badly on the doctrine of “potentization”, the idea that repeated dilution magnified the effect indefinitely. But the core idea of “like cures like”–in modern terms, more like “given agents X and Y that produce similar symptoms, a microdose of X may trigger an immune response that effectively treats Y”–has valid, real applications, and may have others that allopathic medicine has missed.

    • St. Rev says:

      Having said all that, I think people need good semantic stopsigns; it seems unlikely to me that any neglected value in homeopathic treatment could counterbalance the health and wealth that’s currently needlessly squandered on it. So the science fans are right to be wrong, and the previous comment is an extravagant display of contrarian signalling.

    • Randy M says:

      I’m trying to come up with the principle that differentiates homeopathic “like cures like” from vaccination (or even natural immunity). Like only cures like because the antibodies generated by the antigen can be remembered by the body (binding to DNA activator sites or such).
      So if the toxins etc. that homeopathy claims to cure are antigens, it would be plausible. If toxins work a different way than pathogens–or rather, if the response is different enough–then like-cures-like wouldn’t have any reason to be applicable (but may be, in the end, due to unknown processes if empirically indicated).

      • Ialdabaoth says:

        I’m trying to come up with the principle that differentiates homeopathic “like cures like” from vaccination (or even natural immunity). Like only cures like because the antibodies generated by the antigen can be remembered by the body (binding to DNA activator sites or such).

        The problem here is that “homeopathy”, strictly speaking, is a particular idea for how to design new medicines. “Antibodies generated by an antigen are remembered by the body” is an explanation for WHY a homeopathic technique might work. In the same sense that Darwin’s theory of natural selection came before genes were discovered, homeopathic theory came before antibodies were discovered.

        That said, when we hear the term “homeopathy”, we tend to think of a particular cargo-cult implementation of it, which dilutes particular substances (chosen for hysterical^H^H^H^H^H^H^H^Hhistorical reasons) until the likelihood of a single molecule remaining in the solution is effectively zero. But those are “homeopathy” in the same way that Nazi Germany’s racial purity laws are “natural selection”.

        • St. Rev says:

          *touches nose, points to Ialdabaoth*

        • Randy M says:

          “Eye of Newt” is also a theory of how to design medicines; unless you also have a theory for how they will work that jibes with your theories of how the body operates, it’s meaningless to devise a theory of making medicines alone.

        • St. Rev says:

          Randy M: Vaccines and homeopathy both emerged from a few empirical observations and some extremely woolly theorizing. Jenner published on cowpox vaccination for smallpox in 1798; Hahnemann invented homeopathy in 1796. The mechanism for vaccination wasn’t elucidated until much later: the germ theory of disease only took off in the mid to late 19th century, and detailed understanding of immune function came long after that. Until fairly recently (say post-WWII), most drugs were discovered by synthesizing new chemicals and throwing them at test subjects willy-nilly. Even today, rational drug design is at best a strategy for narrowing the drug design space from “mind-blowingly astronomical” to “gargantuan”, and often (at least in psychopharmacology) the theory of how they will work turns out to have almost nothing to do with how they do work.

          It’s problematic (that is, it creates problems) to project backward anachronistic frames on historical discoveries. At the time they were developing, vaccination and homeopathy were very similar ideas–both derive from Paracelsus’ maxim “what makes a man ill also cures him.”

          Homeopathy as a system ran aground on the idea of “potentization”–the astronomical serial dilutions that skeptics use to bingo-card the whole subject. Yes, 15x, 20x, 30x preparations are just expensive magic water. But 3x, 4x, 5x? Some of those might work.

          A steelmanned homeopathy would be a hypothesis about nonlinearity in drug effect curves, and another about immune signaling. And we have tons of real data there.

          Boo-light time, not that anyone’s still reading: actually existing homeopathic “remedies” are bullshit and I’d never waste money on them. But I do want to try Sinus Buster someday (I can’t find it in drugstores around here.)
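
          For scale, a back-of-envelope sketch of what those potency labels mean, assuming the standard convention that each “x” is one 1:10 dilution step:

          ```python
          # Sketch of homeopathic "x" potencies (assumption: each x = one 1:10
          # dilution). Starting from one mole of active substance, roughly how
          # many molecules does a preparation at each potency retain?
          AVOGADRO = 6.022e23  # molecules per mole

          def expected_molecules(x_potency, moles=1.0):
              """Expected molecules remaining after x successive tenfold dilutions."""
              return moles * AVOGADRO * 10.0 ** (-x_potency)

          for x in (4, 5, 15, 30):
              print(f"{x:>2}x: ~{expected_molecules(x):.2g} molecules")
          #  4x: ~6e+19  -> plenty of capsaicin left (the Sinus Buster range)
          #  5x: ~6e+18
          # 15x: ~6e+08  -> a vanishing dose, but molecules remain
          # 30x: ~6e-07  -> effectively zero; "expensive magic water"
          ```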

        • At the time they were developing, vaccination and homeopathy were very similar ideas–both derive from Paracelsus’ maxim “what makes a man ill also cures him.”

          I don’t know about vaccination in general, but Jenner’s original discovery had a good mechanistic theory that seems unrelated. Namely, milkmaids who got cowpox didn’t get smallpox (a different disease). He didn’t vaccinate people with small amounts of smallpox, but with cowpox.

        • St. Rev says:

          David: Good point, that goes a long way to undermine my argument! Variolation (inoculation with smallpox scabs, basically) was in use long before that, but vaccination was the big breakthrough.

          Hahnemann seems to have been inspired by a somewhat analogous ‘breakthrough’, studying the use of cinchona bark to treat malaria, but there are claims that he fudged those results. Cinchona bark turns out to work against malaria because it contains quinine, ofc.

      • St. Rev says:

        Another example, relevant here: allergy shots.

      • Scott Alexander says:

        I don’t know much about homeopathy, but to me the distinguishing principle is that vaccination says “use the small thing before you get sick” and homeopathy says “use it after you get sick”.

        Also, the ability of vaccination proponents to posit a mechanism, use only things that work via this mechanism, and test whether they indeed work.

  14. Shmi Nux says:

    TL;DR: Matthew 7:3 on reflective consistency: “Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the log in your own eye?” (Disclaimer: This is in no way intended to defend or promote a particular religion or any religion at all.)

  15. Pingback: The Epistemological Peril of Punching Down

  16. J. Quinton says:

    Granted, I only have one data point for this, but I’ve found it’s best to pique someone’s interest in rationality by first letting them experience the feeling of certainty — actually noticing it and being conscious of what it feels like — and then pointing out that their certainty was misplaced. That was done by using the 2-4-6 task. It seemed that seeing irrationality in oneself was better than pointing out the obvious irrationality in someone else (e.g. homeopathy).
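
    For anyone who hasn’t seen it, here is a minimal sketch of the 2-4-6 task (assuming the classic setup, where the hidden rule is just “any strictly ascending triple”):

    ```python
    # Wason's 2-4-6 task (classic setup assumed). Subjects are told 2-4-6
    # fits a hidden rule, guess something narrow like "counts up by 2", and
    # tend to propose only triples that CONFIRM that guess -- growing certain
    # without ever probing where the guess might fail.
    def hidden_rule(a, b, c):
        return a < b < c  # the real rule: any strictly ascending triple

    def counts_up_by_two(a, b, c):
        return b == a + 2 and c == b + 2  # the typical over-narrow hypothesis

    confirming = [(2, 4, 6), (8, 10, 12), (100, 102, 104)]
    probing = [(1, 2, 3), (3, 6, 12), (5, 1, 9)]

    for t in confirming + probing:
        print(t, "rule:", hidden_rule(*t), "guess predicts:", counts_up_by_two(*t))
    # The confirming triples fit both rules, so they teach nothing. Only the
    # probing triples -- which the narrow guess says should fail -- reveal how
    # much broader the hidden rule really is.
    ```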

    • grendelkhan says:

      For me, it was the Allais paradox as presented on Less Wrong. I could see what was going on, I could understand the math, and still my intuitions insisted on being wrong. It seriously shook me up.

      • Tab Atkins says:

        But… the Allais paradox isn’t a paradox. You’re not wrong for wanting to choose both 1A and 2B. You’re just a human with a normal non-linear utility function, rather than the linear one that most naive EV calculations assume.

        The key is in the sentence in the wiki that says “[according to] expected utility theory, equal outcomes added to each of the two choices should have no effect on the relative desirability of one gamble over the other”. That’s only true if you have a linear utility function (which no sane person does). It’s actually that the equal outcomes between the gambles are taken for granted as definitely occurring.

        Once you’ve taken for granted that you’ll get $1M (in Experiment 1), Gamble 1B’s reward structure changes. Rather than being $0/$5M, it’s $-1M/$4M. That 1% chance of loss isn’t just a neutral miss; it’s giving back the $1M you’ve already mentally claimed and adjusted your utility for. For nearly all people, going from “has $1M” to “has $0” is a far greater drop than the improvement from gaining an additional four million, so Gamble 1A is the right choice.

        Experiment 2 lines up more with the wiki’s summary. Since the shared outcome is $0, that’s what gets taken for granted – nothing will happen by default, and you’re gambling on something happening. Then the two gambles are more equitable – a 100% chance of $1M, or a 9% chance of nothing and a 91% chance of $5M. It’s now reasonable for people to choose Gamble 2B.

        You probably could hack people’s preferences here – rather than offering Gamble 2B, just offer Gamble 2A, then for the winners, offer them a quintuple-or-nothing bet on 1:10 odds. Leaving the bet leaves you with 2A’s payment structure, taking it gives you 2B’s structure. But I’d bet more people would leave the bet in this situation than would take 2A when presented the two gambles up-front.

        Always remember that linear-utility EV maximizers are insane, and not a model of human behavior, or even rational behavior. They’ll consider Martingale betting a great deal, and pass up a guaranteed million for a 51% chance of $100M and a 49% chance of $-98M.

        • Douglas Knight says:

          Usually when people say that utility isn’t linear, they mean it isn’t linear in money. You appear to be saying this, but it is completely irrelevant to the Allais paradox. What is relevant to the Allais paradox is that utility should be linear in probability. Humans fail at this and they are wrong. “Added” does not mean added money, but added possibilities: situation 1 is a choice between A and B. Situation 2 is: we flip a coin, and if it comes up heads, you get no input in the decision, whereas if it comes up tails, you get put in situation 1, except that you have to make the decision ahead of time. The answer being different in 1 and 2, that is, before and after seeing the coin, is a failure of reflective consistency.

          (Actually, my situation 1 doesn’t come up. The real version is two versions of 2, one in which the heads result is money and one where it is no money; probably because people vary in their response to my situation 1, but the two versions of 2 produce the same results.)

        • Sniffnoy says:

          Douglas Knight has already made the basic point I wanted to, but I wanted to expand on it a bit more.

          In short, you are pretty mixed-up about what people are talking about when they talk about utility functions and linearity. It is indeed the case, as you say, that utility functions are not required to be linear in money, nor in quantity of any other good; and the direction and the degree to which it deviates from linearity determines to what extent you are risk-averse or risk-seeking.

          Utility functions are, however, required to be “linear in probability” — except that really, the preceding statement is actually an abuse of notation, as I’ll get to in a moment. Otherwise you don’t have a utility function; you just have a preference (pre-)ordering. (Some people do use the phrase “utility function” in this way, but seeing as this does not seem to be the sense under discussion, I’m just going to ignore it. Also, to me it seems silly to introduce a function to the real numbers when all you wanted to talk about was a preordering.)

          The Allais paradox is a violation of “linearity in probability”; it does not in any way rely on linearity in money. Your comments about the latter are basically irrelevant.

          (Although, I must disagree with your final comment that a utility function which is linear in money is insane and irrational — by what means are you judging another agent’s terminal values? I mean, sure, it seems nutty for a person to build such an agent, and I’d certainly question its builder’s rationality, but I’m not sure it’s meaningful to state that the agent itself is “irrational”.)

          But back to the point — I have so far not justified why utility should be “linear in probability”, but in the case of the Allais paradox, where the independence assumption of the VNM theorem is violated, the problem can be seen via a Dutch Book argument.

          And indeed I’m not going to give any justification for this beyond A. pointing out the VNM theorem and, B. making some comments on why (in my opinion) the VNM axioms make sense. (At least, they make sense if you already accept the notion of measuring strength-of-belief with probability; otherwise you need something like Savage’s Theorem. But you seem to already be assuming that, so I’ll use the VNM framework.)

          Well, OK — you haven’t done anything to challenge totality, transitivity, or the Archimedean property, so let’s just talk about independence. But if you accept totality and transitivity, then independence becomes easy to justify, because of exactly the sort of Dutch book argument Eliezer makes in his post on it! So, really, I don’t know where you’re coming from here.

          Now I should note here that I’ve really basically been counterarguing rather than refuting; I haven’t actually addressed your argument that this choice is consistent. But you haven’t really given one — you haven’t actually given an example of a utility function that will make this consistent; your calculations are just of the form “well, it could turn out this way”. But as has already been noted, because this violates the independence axiom, there can be no such utility function — at least, not with the definition of “utility function” that’s actually under discussion.

          Finally, a note on the “abuse of notation” issue I’ve mentioned — properly speaking, a utility function is a function (call it f) from consequences to real numbers, not a function (call it F) from gambles to real numbers. Obviously one can induce F from f (by making F “linear in probability”, though “affine” might be a better word than “linear”). But what makes f a utility function is that the F determined in this way actually reflects the agent’s preferences. If there were not required to be any relation between the f describing the agent’s preferences among outcomes and the F describing the agent’s preferences among gambles, what would be the point of talking about f? It would be useless except under conditions of certainty. (And why would we use functions to the real numbers rather than just pre-orderings?) Of course, by the VNM theorem, if our agent satisfies the appropriate axioms, then f is not useless because it in fact determines F entirely, obviating any need for talking about F as a separate object — hence why the simpler object f, not its extension F, is referred to as the “utility function”. (Although the abuse of notation is still perfectly understandable, obviously.)
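
          To spell the inconsistency out with a worked sketch (standard Allais payoffs in $ millions assumed, with u(0) normalized to 0): preferring 1A over 1B requires 0.11·u(1) > 0.10·u(5), while preferring 2B over 2A requires 0.10·u(5) > 0.11·u(1), so no assignment of utilities to outcomes can produce both choices.

          ```python
          # Sketch: no assignment of u(1M) and u(5M) rationalizes the common
          # Allais pattern (1A over 1B and 2B over 2A), because the two
          # preferences reduce to one inequality with opposite signs.
          # Standard payoffs assumed, in $ millions; u(0) normalized to 0.
          import random

          def prefers_1a_and_2b(u1, u5):
              eu_1a = 1.00 * u1
              eu_1b = 0.89 * u1 + 0.10 * u5  # + 0.01 * u(0), which is 0
              eu_2a = 0.11 * u1
              eu_2b = 0.10 * u5
              return eu_1a > eu_1b and eu_2b > eu_2a

          random.seed(0)
          found = any(prefers_1a_and_2b(random.uniform(0, 100), random.uniform(0, 100))
                      for _ in range(1_000_000))
          print(found)  # False: no utility function reproduces both preferences
          ```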

  17. ChristianKl says:

    HPMOR starts with Harry’s father having the wrong attitude, refusing to believe the evidence, and being proven wrong.

    I think you underrate the “this marginal weird thing is important enough to be loud about” factor. There’s one obvious choice, so I think I know what you are pointing to.

    At the Berlin LW community weekend I was talking to someone pushing the new weird thing, putting a lot of energy into pushing the idea into the public. I was surprised to hear him talk about how it’s annoying to deal with atheists.

    The new thing allows people to feel like they belong to a cause that’s better than New Atheism.

    • Pawel Aleksander Fedorynski says:

      I think you underrate the “this marginal weird thing is important enough to be loud about” factor. There’s one obvious choice, so I think I know what you are pointing to.

      This got me curious, since I read Less Wrong casually but I have no idea what you guys are talking about.

  18. tjic says:

    This is an excellent post. I’m a conservative Catholic (and a Less Wrong reader, and a rationalist in 99.9% of matters), and the thing that turns me off to most skeptics is that they’re so stupid about the way they go about their arguments. Most are not, in fact, true skeptics, but just parroters of their own received opinions. You aren’t, and this is why you’re one of my favorite skeptics.

  19. Neel says:

    How would you suggest trying to explain rationality to people?

    • Shmi Nux says:

      Start by telling a story or two how learning “rationality” saved your own personal ass from succumbing to known fallacies and making your life worse. If it hasn’t, you are not qualified to explain rationality.

    • Eli says:

      You probably can’t until you get the target in a position where they actually care about accomplishing some goal thoroughly enough to question their beliefs and win instead of doing the same as ever and failing.

  20. Tenoke says:

    So, what should I use as an example of a popular belief that is obviously incorrect to my listener but feels correct to a large group of people [that the listener is aware of]? Those seem to do the job pretty well, and I don’t see what alternatives you provide (unless you also think that we shouldn’t try to convince people that false beliefs don’t feel false from the inside – then I see a bigger problem)

    Also, yes, I am ignoring all the straw-manning – e.g. ‘No study anywhere has ever found homeopathy to be effective!’ – since, sure, I agree that saying things like that is harmful, but does your target audience really say that stuff (does your friend explaining rationality say that)?

    And to all the people that might link me to this post in the future simply because I’ve mentioned religion or homeopathy in a conversation, while ‘Yvain has shown the mention of those to be harmful’: Argghh

    PS: 1. People focus too much energy on debunking the same old claim, yes. 2. The argument that false beliefs are prevalent and not obvious from the inside *merely* relies on the knowledge of said debunking, but makes a different point.
    3. I don’t see why Scott decides to attack 1 through 2

    • ozymandias says:

      I would suggest you should look at wrong beliefs that the person themself has had in the past and tailor your rhetorical statements to that (for instance: religious beliefs, political beliefs, mental-illness-prompted beliefs, bad choices of career or romantic partner, etc.). This seems more likely to prove the thing you want it to prove– not “some people need rationality, but you are a sensible person who doesn’t believe those silly things!” but “remember the time you believed that Jesus died on the cross for our sins? How do you know you’re not making a mistake equally large *now*?”

      • Tenoke says:

        Yes, I admit that tailoring your response to the person’s history would be optimal. Sadly, most of the time you don’t know the person who you talk to that well (or you might talk to a group of people), so defaulting to examples that are likely to be understood still seems to be the best option [that I can think of].

        Additionally, there is a big difference between the initially criticized example – “It’s important to have correct beliefs. You might think this is obvious, but think about creationists and homeopaths and people who think the moon landing was a hoax.” and “some people need rationality, but you are a sensible person who doesn’t believe those silly things!”. The former does make a point, while the latter is indeed othering. If anything, the former is used to imply that you are not so sensible but actually more like *them* than you think [and thus need to learn tools to become less wrong]

    • Jai says:

      Widely-believed things that your listener knows are false? Many of the obvious things this post warns against harping on qualify, right? Creationism is a pretty widespread belief; in the realm of ethics, that it is a good idea to deny people certain rights based on pigmentation was an extremely common belief until recently, and denying people rights based on sexual orientation is still pretty common. But re-re-re-redebunking these sorts of things is probably a bad idea, for the reasons discussed in the post.

      So, I think maybe you want beliefs whose appeal is viscerally apparent to the listener, even as they have the evidence, means, and will to firmly establish their falsehood. And for this I recommend The Fireplace Delusion.

      • Tenoke says:

        Sure, I’ll use this example if I meet people who have [had] strong beliefs that fires are healthy (or at least think that this is a common belief, which would be hard to discern without talking about this specifically).

        • Jai says:

          I think the target belief here is “being near fires does not have any appreciable negative effects.” I’m pretty sure this is a very widespread belief.

        • Tenoke says:

          Yes, but isn’t that mainly because people have not thought about it much nor heard any evidence of the dangers of fire?

          I don’t imagine that many people hold strong beliefs about this and would actively distrust the evidence of the dangers when presented with it. If anything, it looks to me like using this example exacerbates the problem where people “start thinking all false beliefs are obvious”.

        • Anonymous says:

          >I don’t imagine that many people hold strong beliefs about this and would actively distrust the evidence of the dangers when presented with it.

          I suspect that people who frequently have fires would react less than favorably to this.

      • >Creationism is a pretty widespread belief; In the realm of ethics, that it is a good idea to deny people certain rights based on pigmentation was an extremely common belief until recently, and denying people rights based on sexual orientation is still pretty common.

        Be careful with that. One of those things is not like the others.

        To be clear, the odd one is Creationism. It’s a belief about how the world works, and it’s very wrong.

        The other two are what you might call social technology, which is judged on entirely different criteria. They are social constructs that assume facts about the real world and try to produce some (hopefully prosocial) results. What would the world have to be like for those things (for concreteness, voting restricted to whites, and marriage restricted to male/female pairs) to meet the same standard as other accepted social technology?

        Are you sure you could debunk them if I presented an argument for them?

        Also, taboo “rights”, it’s generally a confusing concept.

    • ChristianKl says:

      Unfortunately, the homeopathy example is not much of a strawman.

      If I discuss with someone whether or not antidepressants are effective, say that I would want to see better evidence than the existing evidence for homeopathy, and accurately describe the evidence for homeopathy, the average New Atheist will be very surprised.

      The top-rated answer to a question on homeopathy on Skeptics Stack Exchange, for example, is plainly wrong (http://skeptics.stackexchange.com/questions/2/does-water-have-a-memory-as-claimed-in-homeopathy):

      “However, the duration of the water memory has been scientifically tested and shown to be very short (less than one billionth of a second). This means that the memory has gone by the time the patient even takes the dose.”

      The fact that water stores some information for a few seconds, far longer than a millisecond, should be obvious to everyone who has interacted with water. If I throw a stone into water, the water moves, and it takes seconds until the water is still and has lost the information.

      The person who wrote the post manages to extrapolate from the fact that a certain mechanism can’t store information to the claim that the whole system has no way of storing information. In the process he overlooks obvious effects.

      • There’s no information in water that a stone has hit it. It’s simply the conservation of energy and an application of Newton’s Third Law.

        When you throw a rock into the water, the energy is transferred from you to the rock to the air and then to the water. It is not a memory of that rock that moves through time and space; it is energy.

        If you want water to remember something, then you need to rewrite several laws of physics and quantum mechanics.

        • AG says:

          When people talk about ‘information’ they don’t have to reject physicalism and postulate a separate kind of fundamental substance; they can understand information to be an arrangement of matter and energy. So there definitely is information in a body of water that a rock hit it. Otherwise how could I look at the waves and tell?

          I’m not sure if you disagree with that and think that in certain situations there actually is some thing called ‘information’ that exists in addition to the complete physical state of the system, and that what you’re saying here is that throwing a rock into water isn’t one of those magical occasions. Or maybe you do understand it, but you’re assuming that people here don’t and you’re trying to explain, for some reason focusing only on the rock in the water and not mentioning that it’s a specific instance of a general principle.

          In any case, something went wrong here.

        • peterdjones says:

          To be precise, there’s no extra non-physical information.

    • Scott Alexander says:

      Yeah, you’re right, I came on too hard against this.

      I think it’s fine to use homeopathy to establish the proposition that some people believe stupid things, if you’re then going to do something useful with that proposition. I’ve probably done this myself. I guess this could be what my friend in the first paragraph was trying to do. If so, I was unfair to him.

    • >So, what should I use as an example of a popular belief that is obviously incorrect to my listener but feels correct to large group of people [that the listener is aware of]? Those seem to do the job pretty well and I don’t see what alternatives you provide (unless you also think that we shouldn’t try to convince people that false beliefs don’t feel false from the inside – then I see a bigger problem)

      Conservatism.

      Moldbug does this really well, IMO. He’s like “look at this 50% of America that is totally deluded and bonkers, they must be infected by some kind of exotic memetic virus”, but then instead of leaving it at that and acting smug about how much better us progressives are than those dumb conservatives, he immediately turns it around and says “but what if progressivism is a bonkers meme-virus as well?” and then goes on to spend the next hundred thousand words ripping your entire political philosophy a new asshole.

      One comes out of that with a new appreciation for being really careful about possible memetic viruses. I think that pattern could make a pretty good intro to rationality.

  21. BenSix says:

    The irony of RationalWiki is that it is about an eighth as reliable a source as Wikipedia. I dropped by its entry on “first cause” arguments recently, and its knockdown refutation is “what caused God?”

  22. Keith Berman says:

    This post was partly inspired by Gruntled and Hinged’s You Probably Don’t Want Peer-Reviewed Evidence For God (actually, I started writing it before that was published – but since Bem has published evidence showing psi exists, I must have just been precognitively inspired by it). But there’s another G&H post that retrocausally got me thinking even more.

    Precognitively and retrocausally have a similar Dickian ring. Hey, let’s take that liberty–although I might feel guilty later.

    PRECOGNITIVELY + RETROCAUSALLY –> RETROCOGNITIVELY + A + SUPER + ALLY +…C?

  23. ozymandias says:

    I’ve used RationalWiki to look up ideas I haven’t heard of before to see what the Consensus Hivemind of Internet Atheism thinks about them, and they’ve actually increased my likelihood of believing some altmedish things. For instance, they told me about the fact that acupuncture may actually have effects for pain relief, and it increased my likelihood of signing up for cryo– if the worst RationalWiki can find is “probably compatible with physics, serious long shot” that is a pretty good bet actually. So at least one person is using it to debug their thinking. 😛

  24. Francesco says:

    But of course dozens of studies have found homeopathy to be effective
    But of course many of these studies have been large double-blinded randomized controlled trials, or even meta-analyses of such.
    Is The Lancet reputable enough for you?
    But of course almost all homeopaths realize this and their proposed mechanism for homeopathic effects not only survives this criticism but relies upon it.

    I don’t understand, and I would be very happy if someone were so kind as to explain it to me – if all these things are true, then how can homeopathy be placed in the group of opinions that would instantly qualify their holder as irrational, on par with the belief that the moon landings were a hoax?

    • Ken Arromdee says:

      Because someone claims there are no studies, and Scott says “sure there are”. But there aren’t any studies in the sense that people really mean when they say that. That is, there aren’t any good studies.

      Each step is like that. There are double-blind randomized studies–there just aren’t double-blind randomized studies with nothing else wrong with them. There is, literally speaking, a journal article; it’s just that nobody who asks for a journal article really means “one article in one journal, which found a tiny effect, where the same journal later published another article which says the effect is no better than placebo”.

      In short, the blogpost takes people’s demands for evidence out of context by interpreting them literally. Scott hasn’t discovered sloppy skeptics who don’t really care about evidence, he’s just discovered his own sloppy natural language parser.

      • ozymandias says:

        Except they clearly don’t mean “good studies” because they are perfectly willing to cite terrible, inaccurate studies that support views that are less low-status than homeopathy.

        • Ken Arromdee says:

          If you have to add the word “clearly” it probably isn’t.

          The fact that someone is willing to cite terrible studies for X doesn’t mean their standards for homeopathy or for most subjects are like their standards for X. It may just as well mean their standards are normally the ones for homeopathy and they are making exceptions for X.

          Some people have even weaker standards of evidence for other subjects than merely accepting bad studies–for instance, many religious scientists accept religions for absurd reasons. Yet we wouldn’t say “that scientist doesn’t mean ‘good evidence’ when talking about science, because he accepts mystical experiences during prayer as evidence for religion, and that’s obviously bad evidence”.

          Furthermore, “people who say they want studies don’t really mean good studies” is a claim that needs to be explicitly stated, and for which you must provide evidence. The original blog post doesn’t do that – it just assumes it as undisputed fact, which it’s not.

          Also, the fact that it can be legitimately disputed means that people who dispute it, even if they are wrong, cannot be considered “sloppy” for doing so.

    • ChristianKl says:

      I don’t think it makes sense to put them into the same category. A Western government agency would never publicly take the position that the moon landing was a hoax.

      North Korea might, but Switzerland won’t. But when it comes to homeopathy, the Swiss approved it (http://rd.springer.com/book/10.1007/978-3-642-20638-2/page/1).

      The issue with homeopathy is mainly that it violates the reductionist world view. Whether or not the moon landing is fake has no consequences for the reductionist world view. It’s just that there is no good reason to believe that the moon landing is fake.

    • Scott Alexander says:

      That’s a really tough question. There are those clever proofs that 1+1=3. The average person isn’t smart enough to find the flaws in those proofs. But the average person should be smart enough to know that 1+1≠3. But I don’t know if this just means the average person uses the absurdity heuristic, which is known to be dangerous.
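
      (A sketch of the kind of proof presumably meant here – this one isn’t from Scott’s comment, but it’s a classic of the genre, and the flaw is a hidden division by zero:)

        \begin{aligned}
        \text{Let } a &= b. \\
        a^2 &= ab \\
        a^2 - b^2 &= ab - b^2 \\
        (a+b)(a-b) &= b(a-b) \\
        a+b &= b \qquad \text{(dividing by } a-b\text{, which is zero)} \\
        2b &= b \\
        2 &= 1 \\
        3 &= 2 \qquad \text{(adding 1 to both sides), hence } 1+1 = 3.
        \end{aligned}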

      I think it comes down to a balance of priors, trusting the scientific establishment, being able to determine that the studies supporting homeopathy are vastly outnumbered by the ones that don’t, and maybe, if you’re lucky, having some statistical intelligence you can use to dissect claims.

      • Francesco says:

        What kind of evidence would convince you or (a separate question) the average person in the rationalist community that homeopathy is true?
        Note that I have no axe to grind here – I’m not a supporter of homeopathy.

        • David Simon says:

          Well, if homeopathy is real, there should be an explanation for why non-crappy tests of homeopathy so far have been negative or negligible.

          If we had a blinded non-crappy test showing that homeopathy does actually work in the presence of some X-factor, which has been absent in prior tests, I’d probably find that at least interesting. For example, maybe it only works if everyone handling the medicine actually believes it works.

          It wouldn’t be that convincing to me though, until there was a lot of replication, and some confirmation of related effects (e.g. if X-factor long-term water memory is a thing, then we probably should see some non-medical effects as well, and can test for that).

        • nydwracu says:

          Well, sometimes homeopathy does work.

          I once had an eye doctor who recommended homeopathic eye drops. He gave me a sample. I looked at the ingredients, and it was… regular eye drop ingredients, 6x biobabble. The exact same stuff as real eye drops—but with better marketing, and maybe more of a placebo effect for people who believe in homeopathy. (Does the placebo effect work like that? Does its power increase with belief?)

      • Francesco says:

        Since you wrote:

        I am of course being mean here. Being open-minded to homeopaths – reading all the research carefully, seeking out their own writings so you don’t accidentally straw-man them, double-checking all of your seemingly “obvious” assumptions – would be a waste of your time.

        And someone who demands that you be open-minded about homeopathy would not be your friend. They would probably be a shill for homeopathy and best ignored.

        Here’s a better question than the one I asked previously: what kind of evidence would it take for you to simply be open-minded about homeopathy?

        Please believe me, I’m not a “shill for homeopathy” since I’m not interested in it and in fact lean against it. I’m just trying to understand how you and rationalists view things.

        • Martin-2 says:

          “Here’s a better question… what kind of evidence would it take for you to simply be open-minded about homeopathy?”

          I actually think your first question was better, since it translates more easily to probabilistic language. If you believe something is true that means you assign it a high probability. If you’re open-minded about something then that has more to do with how it makes you feel and want to spend your time, though if that’s what you’re interested in then cool. But hopefully the answer to both questions is the same: if Scott observes things that would be more likely in a universe where homeopathy worked than one where it didn’t work then he will assign a higher probability to the proposition “homeopathy works” in accordance with Bayes’ theorem. If this keeps happening then soon he’ll start telling people homeopathy deserves a chance, and eventually he’ll be a raving homeopath writing subversive popular books.
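
          (For concreteness, the update Martin-2 is describing – with numbers invented purely for illustration – is:

            P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

          Start with a prior P(H) = 0.01 that homeopathy works, and observe something five times likelier if it does – say P(E|H) = 0.5 versus P(E|¬H) = 0.1. The posterior is 0.005 / (0.005 + 0.099) ≈ 0.048: still low, but nearly five times the prior. Enough observations like that and the raving-homeopath endpoint follows.)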

  25. Said Achmiz says:

    “If only there were irrational people somewhere, insidiously believing stupid things, and it were necessary only to separate them from the rest of us and mock them. But the line dividing rationality and irrationality cuts through the mind of every human being. And who is willing to mock a piece of his own mind?”

    (With apologies to Solzhenitsyn).

  26. Transhumanly intelligent being says:

    Wow, human, you sure sound confident you know what rationality is about.

    • Will Newsome says:

      For seriously though I can’t help but like a post that implicitly says ‘rah rah epistemic paranoia’. But I’d like it more if you also harped along the lines of ‘rah rah moral paranoia’ and ‘rah rah not rah rahing’ and generally anything that causes your readers to fear God.

  27. Levi Aul says:

    This seems like a really strong example of advice that needs to be reversed. The only people who will grasp what this post is saying at all, rather than wandering off into debating the surface issues, are way off to the right of the bell curve on doubt – self-doubt especially. This might even be enough to send them into a spiral of self-doubt, asking whether they’re really justified in the confidence they hold in anything, including the principle of skepticism itself.

    Meanwhile, the people who need to have more doubt will never be shown anything like this article. Indeed, they won’t even be able to get past the straw-man examples at the beginning of the article, because they’ll “smell” from them that it was written from an implicit perspective of speaking to people who would Other them.

    • Anonymous says:

      I don’t think you got it. *Everyone* needs to have… maybe not *more* doubt, but *better* doubt.

    • St. Rev says:

      I will testify here that epistemic paranoia is exhausting and paralyzing–the computational overhead is oppressive, never mind the existential anxiety. The ability to sustain delusional confidence seems to be necessary for daily functioning in this culture.

    • grendelkhan says:

      Yeah; just as creationists, anti-vaxxers and the like aren’t really a problem in RationalWiki circles, RationalWiki types aren’t really a problem in places like this.

    • Scott Alexander says:

      You’re probably right.

  28. von Kalifornen says:

    I would comment that this reaches a fever pitch when combined with the “who taught you critical thinking, anyway?” problem.

    • Scott Alexander says:

      Just so other people get the pleasure of knowing what (I think) VK was talking about:

      MORPHEUS: For the longest time, I wouldn’t believe it. But then I saw the fields with my own eyes, watched them liquefy the dead so they could be fed intravenously to the living –

      NEO (politely): Excuse me, please.

      MORPHEUS: Yes, Neo?

      NEO: I’ve kept quiet for as long as I could, but I feel a certain need to speak up at this point. The human body is the most inefficient source of energy you could possibly imagine. The efficiency of a power plant at converting thermal energy into electricity decreases as you run the turbines at lower temperatures. If you had any sort of food humans could eat, it would be more efficient to burn it in a furnace than feed it to humans. And now you’re telling me that their food is the bodies of the dead, fed to the living? Haven’t you ever heard of the laws of thermodynamics?

      MORPHEUS: Where did you hear about the laws of thermodynamics, Neo?

      NEO: Anyone who’s made it past one science class in high school ought to know about the laws of thermodynamics!

      MORPHEUS: Where did you go to high school, Neo?

      (Pause.)

      NEO: …in the Matrix.

      MORPHEUS: The machines tell elegant lies.

      (Pause.)

      NEO (in a small voice): Could I please have a real physics textbook?

      MORPHEUS: There is no such thing, Neo. The universe doesn’t run on math.

  29. Ken Arromdee says:

    But of course dozens of studies have found homeopathy to be effective.

    This isn’t a sign of sloppiness on the part of homeopathy opponents; it’s a sign of not understanding how conversation works. In ordinary language, “there’s no evidence for” means “there’s no good evidence for”. A lot of your signs of “sloppiness” are just pointing out that what people say is not literally accurate. Of course it’s not. But if you think it’s supposed to be, you’re misunderstanding how human beings communicate.

    But of course almost all homeopaths realize this and their proposed mechanism for homeopathic effects not only survives this criticism but relies upon it

    If “homeopaths” refers to “the guys making the medicines”, this is probably true. If “homeopaths” refers to the guys buying it, it’s probably false. I suspect that the average person buying homeopathic medicine does not know that the concoctions contain no molecule of the substance, and if they did find out, would readily agree that that is indeed nonsensical.

    Have you ever spent the five seconds it would take to look up a survey of what percent of doctors and biologists believe homeopathy doesn’t work? Or are you just assuming that’s true because someone on your side told you so and it seems right?

    No, I’m assuming it’s true because it’s the nature of some claims that if they were true, we’d have heard about them. If any substantial number of biologists and doctors believed homeopathy works, you can bet that homeopaths would bring the statistic out every time homeopathy was questioned. They don’t. If any substantial number *started* believing that homeopathy works, it would make at least the front page of Slashdot. It hasn’t.

    • ChristianKl says:

      Most of the studies that investigated homeopathy found it to be effective. It’s only if you add quality standards to weed out low-quality studies that you get the result that homeopathy doesn’t work.

      The evidence for homeopathy is good enough that a meta-study commissioned by the Swiss health authorities found it to be a cost-effective treatment.

      Are you aware that the Swiss report exists? If not, what conclusions do you draw from that about being well informed about how many doctors believe in homeopathy?

      I suspect that the average person buying homeopathic medicine does not know that the concoctions contain no molecule of the substance, and if they did find out, would readily agree that that is indeed nonsensical.

      That seems to be a pretty strong claim. If it’s that easy to convince people that homeopathy doesn’t work, how many people did you manage to convince?

      • Ken Arromdee says:

        Are you aware that the Swiss report exists? If not, what conclusions do you draw from that about being well informed about how many doctors believe in homeopathy?

        http://www.skepticalraptor.com/skepticalraptorblog.php/switzerland-endorse-homeopathy/

        • ChristianKl says:

          That doesn’t answer the question I asked: whether you were aware of the existence of the document before I pointed to it.

      • Anthony says:

        In my mental model, homeopathy works because placebos work. Investigations have found over and over that placebo treatments are better than nothing, so it is not surprising that homeopathic treatments work as well.

        The interesting investigation would be three sets of double-blind treatments, as sketched below. Subjects in the first set would be told that the investigation was into a new treatment for an ailment, and would receive either an actual, effective drug, a homeopathic remedy, or maybe a pure placebo (candy-coated citric acid with sugar, perhaps?). The other two sets would be told that the investigation was into the effectiveness of homeopathic remedies – with one of those sets also given some information about the theory and practice of preparing homeopathic remedies – and would receive the same two or three drugs. Perhaps, if the complaint is mild enough, there could be another control group who were told by their doctor that the ailment would resolve itself in a few days without drugs.

        I don’t know if such a study exists, but I’d love to see one.
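
        (A toy simulation of that design – a minimal sketch in Python, where every arm label, sample size, and effect probability is invented purely for illustration, under the null hypothesis that the homeopathic remedy behaves exactly like the inert placebo within each framing:)

            import itertools
            import random

            # The three framings and three drugs of the proposed design.
            framings = [
                "told: new-treatment trial",
                "told: homeopathy trial",
                "told: homeopathy trial + theory primer",
            ]
            drugs = ["active drug", "homeopathic remedy", "inert placebo"]

            # Invented probabilities. Under "homeopathy = placebo", the remedy
            # gets no boost beyond its framing's placebo effect.
            BASE = 0.30  # spontaneous improvement rate
            PLACEBO_BOOST = {framings[0]: 0.15, framings[1]: 0.10, framings[2]: 0.05}
            DRUG_BOOST = 0.25  # pharmacological effect, active drug only

            random.seed(0)
            N = 200  # subjects per arm

            for framing, drug in itertools.product(framings, drugs):
                p = BASE + PLACEBO_BOOST[framing]
                if drug == "active drug":
                    p += DRUG_BOOST
                improved = sum(random.random() < p for _ in range(N))
                print(f"{framing:42} {drug:20} {improved:3}/{N} improved")

        If homeopathy were more than placebo, the remedy arms would separate from the inert-placebo arms within each framing; under this null they shouldn’t, beyond noise.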

    • Scott Alexander says:

      I disagree with the thrust of your post.

      You can No True Scotsman your way to “no good studies show homeopathy works” if you define “good study” as “one that produces the correct answer”.

      But where I was going with this is…well…we start with people in armchairs pontificating that it seems to them that X must be true, and other people in different armchairs equally certain X must be false.

      Science is supposed to be an improvement on that process – where no matter what your opinion, experiments produce brute facts and then you are forced to either accept them or be clearly revealed as a fraud.

      But the actual process is really messy. What really happens is that some studies show X, other studies show not X, and people sit in their armchairs pontificating about how only the studies that show X are any good, and other people sit in different armchairs equally certain that only the studies that showed not X were valid.

      A relativist would stop there and say science doesn’t work. I would be less pessimistic and say that first of all not all science is like that, and second of all disputes about which studies are correct tend to be much more resolvable than disputes about what’s true, especially since the worst case scenario is you just do more studies and see who can predict their outcome.

      But the “only bad studies show homeopathy” thing seems to me simplistic. If it were as simple as “all the studies that show homeopathy forget to include a placebo group”, then fine. But the problems are actually much subtler than that, sometimes no one knows exactly what they were, and a smart homeopath could probably come up with equally plausible-sounding problems with the studies that debunk homeopathy.

      So I think the problem I’m trying to bring up here is that the standard phrasing suggests Science succeeds at its goal of mechanistically producing truth in a way that leaves no room for opinion, whereas the reality is that Science sort of chips away little by little at the need for opinion and is able to inject some truth in there, but it’s still really really ugly.

      This is why we have things like meta-analyses and Cochrane Reviews, and why a lot of really well-studied questions (like whether antidepressants outperform placebo in mild depression) are still very controversial.

      There are probably “good studies”, by most people’s definition of good, on both sides of the antidepressant question. Yet one side’s studies weren’t good enough.

      • Ken Arromdee says:

        My pointing out that people who say “no evidence” don’t literally mean “no evidence” isn’t a No True Scotsman fallacy; it’s pointing out that they mean something different from what you’re interpreting them to mean.

        Just because someone comes up with a string of different flaws doesn’t mean he’s making it up as an excuse after the fact. There are, after all, a lot of different ways in which studies can be flawed.

        But the problems are actually much subtler than that, sometimes no one knows exactly what they were

        “There’s no obvious flaw, but it hasn’t been reproduced by scientists enough that we can really be sure it proves anything rather than being a one-time fluke with an unknown cause” counts as a flaw. Such studies are also characteristic of pathological science because they appear near the limit of detectability.

  30. blacktrance says:

    While there is a danger to saying something like “Look at these dumb people – we’re not like *them*, are we?”, there is also a way in which something similar can be useful: “Look at these people holding these erroneous beliefs – how do we find out if we’re making similar mistakes that would look just as dumb to a more knowledgeable person?”

  31. St. Rev says:

    Inoculation is when you use a weak pathogen like cowpox to build immunity against a stronger pathogen like smallpox.

    Sort of like…homeopathy.

  32. suntzuanime says:

    Yes, by all means, let’s not fall into the trap of Othering, like those nasty Others do.

  33. adbge says:

    (yes, I did just use “self” as a verb. I don’t even have the excuse of it being part of a philosophical tradition)

    Well, it’s a part of this philosophical tradition.

    And if anyone needs doubt training, try adopting a 1:1 reading ratio. If you lean left, read Chomsky, then read The Anti-Chomsky Reader, then read another smart progressive, then a smart conservative. Repeat.

    The generalization being, of course: explicitly attempt to read the opposite. Any contentious belief can be paired this way — reductionism, utilitarianism, whatever. Don’t stick to books, either. Surrounded by vegan or feminist jokes? Try reading r/feminism or whatever — you know, expose yourself to people who actually hold that belief.

    • Benquo says:

      Finding opposite opinions that are likely to convince you is difficult to do in a single swoop. I would instead suggest finding the most reasonable-to-you-sounding person in that direction, finding out who they read further in that direction, and iterating.

      For example, I moved leftward from libertarianism by reading Matthew Yglesias and Will Wilkinson, not Marx, and rightward by reading Scott, not neoreactionaries or regular conservatives.

    • ChristianKl says:

      When I go looking for smart conservatives, where do I go?

    • johnwbh says:

      I do my best to do this, but I worry I might be selection-biased towards things I disagree with in ways I find interesting and enjoyable, rather than things that genuinely infuriate or confuse me. (I suspect this is why a lot of progressive/rationalist types like reading neoreactionaries more than mainstream conservatives.)

      Any recommendations for smart deontologists, or for defenses of continental-style philosophy?

  34. a person says:

    The more we concentrate on homeopathy, and moon hoaxes, and creationism – the more people who have never felt any temptation towards these beliefs go through the motions of “debunk”-ing them a hundred times to one another for fun – the more we are driving home the message that these are a representative sample of the kinds of problems we face.

    Completely agree. Going through the motions of “debunk”-ing these sorts of things a hundred times to one another for fun seems like an excellent summary of the “skeptic community”, a community that I have always disliked. It seems like a bunch of atheists got bored rehearsing the same ten or so arguments for atheism and decided to move on to similar issues they could argue about with imaginary people. These people have decided that they like science and things that look like science, and don’t like religion and things that look kind of like religion. They also have a general approval of academia and mainstream opinion, as well as probably a moderate contempt for beliefs that are lower-class. Then they proceed to go through all the evidence for the side they support so that they can argue it better. In short, these people pick their beliefs on purely aesthetic grounds and then go on to find the evidence to support those beliefs. This is pretty much the polar opposite of rationality.

    • BenSix says:

      Indeed. Sometimes I long for the days when “nerd” made reference to one’s academic dedication and not one’s aesthetic preferences. Sure, being described in such terms often implied that one was to receive a wedgie but at least it also hinted at one’s accomplishments. Nowadays, to say that one is an admirer of science can mean nothing more than that one likes Dr Who and inspirational memes featuring Neil DeGrasse Tyson. Witness the “i fucking love science” Facebook page.

      So much “skepticism” seems to be the IQ equivalent of picking on the shortest kid in the playground. When it comes to matters of importance, it can be startling how irrational such defenders of objective thought can be. The skeptic community has been riven by allegations of sexual harassment against some of its leading lights – a matter on which its members have very firm opinions, given that none of the cases has seen the inside of a courtroom.

      • ozymandias says:

        Much of the controversy about sexual harassment in the atheist community is about sexual harassment of convention-goers. It is not illegal to sexually harass a person attending a convention (it is illegal to sexually harass your *employee*, but not someone who is just going to the con). Therefore, much of the controversy could not possibly end up in a courtroom.

        As for the ones that could end up in a courtroom but don’t, like… have you seen how much shit people get for accusing someone of sexual harassment, even without naming the person they’re accusing? Going to court could only make that worse. It seems perfectly reasonable to me that a legitimately sexually harassed person would go “actually, I want *fewer* death threats rather than *more*, thank you.” In addition, most of the groups accused are charities that are doing good work, and legal defense is expensive; it seems reasonable that people might want to put pressure on them from the outside rather than drawing money away from the good work that they’re doing. And it seems reasonable that the level of certainty that would make you comfortable making someone pay a fine is higher than the level of certainty that would make you comfortable avoiding someone or not inviting them to parties.

        • BenSix says:

          I don’t want to name names but the cases I was thinking of concern one gentleman who may or may not be rad and another who I’ll hint towards in rhyme because I have no way of murmuring it. My understanding is that both of them are headed to the courts but perhaps I am misinformed on that.

          Regardless, it seems irrational to draw firm conclusions from either of two differing accounts of events that involved absolute strangers (and this view is only reinforced by the fact that the conclusions skeptics appear to have drawn depend entirely on which side of the “sexual harassment at conferences” affair they took).

    • peterdjones says:

      +1

      Imo, the people who are more right about rationality are the philosophers. They are generally delighted to find good arguments for unpopular positions, as it makes things more interesting.