David Chapman keeps complaining that “Bayesianism” – as used to describe a philosophy rather than just a branch of statistics – is meaningless or irrelevant, yet is touted as being the Sacred Solution To Everything.
In my reply on his blog, I made the somewhat weak defense that it’s not a disaster if a philosophy is not totally about its name. For example, the Baptists have done pretty well for themselves even though baptism is only a small part of their doctrine and indeed a part they share with lots of other denominations. The Quakers and Shakers are more than just people who move rhythmically sometimes, and no one gives them any grief about it.
But now I think this is overly pessimistic. I think Bayesianism is a genuine epistemology and that the only reason this isn’t obvious is that it’s a really good epistemology, so good that it’s hard to remember that other people don’t have it. So let me sketch two alternative epistemologies and then I’ll define Bayesianism by contrast.
Everyone likes to beat up on Aristotle, and I am no exception. An Aristotelian epistemology is one where statements are either true or false and you can usually figure out which by using deductive reasoning. Tell an Aristotelian a statement and, God help him, he will either agree or disagree.
Aristotelians are the sort of people who say things like “You can never really be an atheist, because you can’t prove there’s no God. If you were really honest you’d call yourself an agnostic.” When an Aristotelian holds a belief, it’s because he’s damn well proven that belief, and if you say you have a belief but haven’t proven it, you are a dirty cheater taking epistemic shortcuts.
Very occasionally someone will prove an Aristotelian wrong on one of his beliefs. This is shocking and traumatic, but it certainly doesn’t mean that any of the Aristotelian’s other beliefs might be wrong. After all, he’s proven them with deductive reasoning. And deductive reasoning is 100% correct by definition! It’s logic!
Nobody likes to beat up on Robert Anton Wilson, and I consistently get complaints when I try. He and his ilk have seen through Aristotelianism. It’s a sham to say you ever know things for certain, and there are a lot of dead white men who were cocksure about themselves and ended up being wrong. Therefore, the most virtuous possible epistemic state is to not believe anything.
This leads to nihilism, moral relativism, postmodernism, and mysticism. The truth cannot be spoken, because any assertion that gets spoken is just another dogma, and dogmas are the enemies of truth. Truth is in the process, or is a state of mind, or is [insert two hundred pages of mysterianist drivel that never really reaches a conclusion].
“Epistemology X” is the synthesis of Aristotelianism and Anton-Wilsonism. It concedes that you are not certain of any of your beliefs. But it also maintains that you are not in a position of global doubt, and that you can update your beliefs using evidence.
An Xist says things like “Given my current level of knowledge, I think it’s 60% likely that God doesn’t exist.” If they encounter evidence for or against the existence of God, they might change that number to 50% or 70%. Or if they don’t explicitly use numbers, they at least consider themselves to have strong leanings on difficult questions but with some remaining uncertainty. If they find themselves consistently over- or under-confident, they can adjust up or down until they reach either the certainty of Aristotelianism or the total Cartesian doubt of Anton-Wilsonism.
Epistemology X is philosophically superior to its predecessors, in that it understands that you are neither completely omniscient nor completely nescient; all knowledge is partial knowledge. And it is practically superior, in that it allows for the quantification of belief and therefore supports nice things like calibration testing and prediction markets.
What can we call this doctrine? In the old days it was known as probabilism, but this is unwieldy, and it refers to a variety practiced before we really understood what probability was. I think “Bayesianism” is an acceptable alternative, not just because Bayesian updating is the fundamental operation of this system, but because Bayesianism is the branch of probability that believes probabilities are degrees of mental credence and that allows for sensible probabilities of nonrepeated occurrences like “there is a God.”
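For the concrete-minded, the updating operation at the heart of this system is simple enough to sketch in a few lines of Python. This is only an illustration, not part of the argument, and the numbers (a 60% prior, evidence twice as likely under the hypothesis as under its negation) are invented for the example:

```python
def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Start at 60% credence; observe evidence that is twice as likely
# if the hypothesis is true (0.8) as if it is false (0.4).
posterior = update(0.60, 0.8, 0.4)
print(round(posterior, 2))  # 0.75
```

The point is just that credences move smoothly with evidence: strong evidence moves the number a lot, weak evidence a little, and neither the Aristotelian's certainty nor the Anton-Wilsonian's global doubt ever has to be invoked.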
“Jason” made nearly this exact same point on David’s blog. David responds:
1) Do most people really think in black and white? Or is this a straw man?
2) Are numerical values a good way to think about uncertainty in general?
3) Does anyone actually consistently use numerical probabilities in everyday situations of uncertainty?
The discussion between David and Jason then goes off on a tangent, so let me give my answer to some of these questions.
Do people really think in black and white? Or in my formulation, is the “Aristotelian” worldview really as bad as all that? David acknowledges the whole “You can’t really be an atheist because…” disaster, but says belief in God is a special case because of tribal affiliation.
I have consistently been tempted to agree with David – my conception of Aristotelianism certainly sounds like a straw man. But I think there are some inferential distances going on here. A year or so ago, my friend Ari wrote of Less Wrong:
There are a few posts by Yudkowsky that I think deserve the highest praise one can give to a philosopher’s writing: that, on rereading them, I have no idea what I found so mindblowing about them the first time. Everything they say seems patently obvious now!
Obviously not everyone gets this Bayesian worldview from Less Wrong, but I share this experience of “No, everything there is obvious, surely I must always have believed it” while having a vague feeling that there had been something extremely revolutionary-seeming to it at the time. And I have memories.
I remember how some of my first exposure to philosophy was arguing against Objectivists in my college’s Objectivist Club. I remember how Objectivism absolutely lampshades Aristotelianism, how the head of the Objectivist Club tried very patiently to walk me through a deductive proof of why Objectivism was correct from one of Rand’s books. “It all starts with A = A,” he told me. “From there, it’s just logic.” Although I did not agree with the proof itself, I don’t remember finding anything objectionable in the methodology behind it, nor did any of the other dozen-odd people there.
I remember talking to my father about some form of alternative-but-not-implausible medicine. It might have been St. John’s Wort – which has an evidence base now, but this was when I was very young. “Do you think it works?” I asked him. “There haven’t been any studies on it,” he said. “There’s no evidence that it’s effective.” “Right,” I said, “but there’s quite a bit of anecdotal evidence in its favor.” “But that’s not proof,” said my father. “You can’t just start speculating on medicines when you don’t have any proof that they work.” Now, if I were in my father’s shoes today, I might still make the same argument based on a more subtle evidence-based medicine philosophy, but the point was that at the time I felt like we were missing something important that I couldn’t quite put my finger on, and looking back on the conversation, that thing we were missing is obviously the notion of probabilistic reasoning. From inside I know I was missing it, and when I asked my father about this a few years ago he completely failed to understand what relevance that could possibly have to the question, so I feel confident saying he was missing it too.
I remember hanging out with a group of people in college who all thought Robert Anton Wilson was the coolest thing since sliced bread, and it was explicitly because he said we didn’t have to believe things with certainty. I’m going to get the same flak I always get for this, but Robert Anton Wilson, despite his brilliance as a writer and person, has a really dumb philosophy. The only context in which it could possibly be attractive – and I say this as someone who went around quoting Robert Anton Wilson nonstop for several months to a year – is if it were a necessary countermeasure to an even worse epistemology that we had been hearing our entire lives – kind of like how the only excuse for the existence of Neoreactionaries is the existence of Social Justice Warriors. What philosophy is this? Anton Wilson explicitly identifies it as the Aristotelian philosophy of deductive certainty.
And finally, I remember a rotation in medical school. I and a few other students were in a psychiatric hospital, discussing with a senior psychiatrist whether to involuntarily commit a man who had made some comments which sort of kind of sounded maybe suicidal. I took the opposing position: “In context, he’s upset but clearly not at any immediate risk of killing himself.” One of the other students took the opposite side: “If there’s any chance he might shoot himself, it would be irresponsible to leave him untreated.” This annoyed me. “There’s ‘some chance’ you might shoot yourself. Where do we draw the line?” The other student just laughed. “No, we’re being serious here, and if you’re not totally certain the guy is safe, he needs to be committed.”
(before Vassar goes off on one of his “doctors are so stupid, they don’t understand anything” rants, I should add that the senior psychiatrist then stopped the discussion, backed me up, and explained the basics of probability theory.)
So do most people really think in black and white? Ambiguous. I think people don’t account for uncertainty in Far Mode, but do account for it in Near Mode. I think if you explicitly ask people “Should you take account of uncertainty?” they will say “yes”, but if you ask them “Should you commit anybody who has any chance at all of shooting themselves?” they will also say yes – and if you ask them “What chance of someone being a terrorist is too high before you let them fly on an airplane, and don’t answer ‘zero’?” they will look at you as if you just grew a second head.
In short, they are not actually idiots, but they have no coherent philosophical foundation for their non-idiocy, and this tends to show through at inconvenient times.
Probability theory in general, and Bayesianism in particular, provide a coherent philosophical foundation for not being an idiot.
Now in general, people don’t need coherent philosophical foundations for anything they do. They don’t need grammar to speak a language, they don’t need classical physics to hit a baseball, and they don’t need probability theory to make good decisions. This is why I find all the “But probability theory isn’t that useful in everyday life!” complaining so vacuous.
“Everyday life” means “inside your comfort zone”. You don’t need theory inside your comfort zone, because you already navigate it effortlessly. But sometimes you find that the inside of your comfort zone isn’t so comfortable after all (my go-to grammatical example is answering the phone “Scott? Yes, this is him.”) Other times you want to leave your comfort zone, by for example speaking a foreign language or creating a conlang.
When David says that “You can’t possibly be an atheist because…” doesn’t count because it’s an edge case, I respond that it’s exactly the sort of thing that should count because it’s people trying to actually think about an issue outside their comfort zone which they can’t handle on intuition alone. It turns out when most people try this they fail miserably. If you are the sort of person who likes to deal with complicated philosophical problems outside the comfortable area where you can rely on instinct – and politics, religion, philosophy, and charity all fall in that area – then it’s really nice to have an epistemology that doesn’t suck.
A while ago I wrote a post called Arguments From My Opponent Believes Something (read it!) which I feel was sorta misunderstood.
I made fun of people who attack arguments on the grounds that “this is like a religion!” or “some people say this solves everything!”. Some people pointed out that, in fact, many things are like religions (religions being just the most obvious example) and sometimes people do fall into the trap of claiming their pet theory can explain everything.
And okay, I agree.
Those arguments aren’t dangerous because they’re never true. They’re dangerous because you can always make them, whether they’re true or not.
You can take any position in any argument and accuse the proponents of believing it fanatically. And then you’re done. There’s no good standard for fanaticism. Some people want to end the war in Afghanistan? Simply call them “anti-war fanatics”. You don’t have to prove anything, and even if the anti-war crowd objects, they’re now stuck objecting to the “fanatic” label rather than giving arguments against the war.
(if a candidate is stuck arguing “I’m not a child molester”, then he has already lost the election, whether or not he manages to convince the electorate of his probable innocence)
And then when the war goes bad and hindsight bias tells us it was a terrible idea all along, you can just say “Yes, people like me were happy to acknowledge the excellent arguments about the war. It was just you guys being fanatics about it all the time which turned everyone else off.”
One of the wisest things I ever saw on Twitter (which is a low bar, sort of like “one of Hitler’s most tolerant speeches”) was on arrogance. “If someone you never met calls you ‘arrogant’, it means he can’t find anything else,” the tweet said. “Otherwise, he would have called you ‘wrong’.” My quotes file mysteriously labels this as “Heuristic 81 from Twitter”, without giving a source or any hint on what the other eighty heuristics might be.
The Arguments From My Opponent Believes Something are a lot like accusations of arrogance. They’re last-ditch attempts to muddy up the waters. If someone says a particular theory doesn’t explain everything, or that it’s elitist, or that it’s being turned into a religion, that means they can’t find anything else.
Otherwise they would have called it wrong.