As far as I know, there’s nothing everyone in philosophy agrees is right. But there are a few things that everyone in philosophy agrees are wrong. Right now I can think of two of them – René Descartes, and logical positivism.
Er, well, one of them. Because I kind of like logical positivism.
And before everyone yells at me, I understand this is sort of sketchy. I’m not saying that they were entirely right about anything, or that their criteria work exactly as stated. But it’s like – well, take those ancient philosophers who said everything was made out of atoms. In fact, they’re wrong. Light isn’t made out of atoms, mathematics isn’t made out of atoms, quarks aren’t made out of atoms, et cetera. But they were sure on to something, and I give them much more credit than philosophers who didn’t say everything was made out of atoms.
I am an inexact person, and I tend to think inexactly, and the fact that the logical positivists were working in an area that vaguely points to a cluster of correct things is good enough for me. Let me explain what I mean by “a cluster of correct things”.
In the 1700s, David Hume came up with an idea now called “Hume’s fork”, which Wikipedia describes like so:
Hume’s fork is an explanation of David Hume’s aggressive division between “relations of ideas” versus “matters of fact and real existence”. By Hume’s fork, relations among ideas are strictly divided from states of actuality.
(and before you ask, yes, Hume’s Fork is also a legendary artifact in Dungeons and Discourse)
Then in the 1900s the logical positivists came around; they get described as saying “that the truths of science are verifiable empirical claims and that the truths of logic and mathematics are tautologies. These two constitute the entire universe of meaningful judgements; anything else is nonsense.”
And in the 2010s, the latest Less Wrong sequence, Highly Advanced Epistemology 101 for Beginners, tried to reduce meaningful statements to “two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical validity by comparison to models pinned-down by axioms” as well as combinations between them. This is a really good sequence, and I especially recommend Mixed Reference, which says a lot of the stuff below only better (I did avoid re-reading it until I was done writing this, so as to keep my brain at least a little in original-idea-mode).
These three claims are somewhat different, but they all seem to have kind of the same idea. They all seem to be saying that we can divide meaningful things into something-kinda-like-science, and something-kinda-like-logic, and that everything else is meaningless. Their differences seem to lie mostly in how “something-kinda-like-science” gets cashed out.
And while I acknowledge the importance of figuring out exactly what “something-kinda-like-science” means, right now that task seems less important to me than the overall observation that having even this vague, poorly-specified sort of system is a whole lot better than not having any system of this sort at all.
More importantly, this idea is productive. I admit my assertion of its productivity is highly biased, in that by “productive” I mean “things I want to hear fall out of it”, but I do find this happening with gratifying frequency. When this system red-flags a statement as “probably not meaningful”, further investigation very often shows it really is something bad that needs to be thrown out. And when this isn’t the case, I usually gain a much more solid understanding of the statement by trying to see exactly how it reduces to one of these two kinds of meaning or a combination thereof.
It may be that there are some kinds of statements that are meaningful but just plain don’t work this way. I am having trouble thinking of them, but it’s normal to have trouble coming up with statements that disprove a theory you like, and I’m sure I will hear many attempts at them in the comments. But even if they exist, they seem to me much like the “light is not made of atoms” example above – correct, but irrelevant to the fact that atomic theory is really awesome and that if someone goes around saying “That particular horse there is not made of atoms, and no one can force me to say that it is,” they’re probably making a mistake.
As an example of “things that I do think yield to this positivist technique”, let me give some classic examples of things that people say don’t.
Start with mathematics. This one is easy. Mathematical systems are systems of axioms and theorems. These are logical tautologies. Some of these systems seem to describe our world pretty well. This is an empirical observation.
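To make the axioms-and-theorems picture concrete, here is a toy illustration in the Lean proof assistant (the propositions `P` and `Q` and the axioms are my own placeholders, not anything from the literature):

```lean
-- A tiny formal system: two propositions, two axioms, one theorem.
axiom P : Prop
axiom Q : Prop
axiom h1 : P        -- we simply posit P
axiom h2 : P → Q    -- and that P implies Q

-- The theorem's truth is purely a relation among the axioms — a tautology
-- in the positivists' sense. It says nothing about the world.
theorem q_holds : Q := h2 h1
```

Whether any part of the physical world actually satisfies a given system’s axioms is the separate, empirical half of the picture.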
Move on to morality. A little harder. The definition of any moral system is a logical tautology; if I define “utilitarianism” as “act for the greatest good of the greatest number”, I have defined what is…not quite a formal system, but could probably be turned into one if someone were rigorous enough. My brain seems to exist in such a way that it reflectively endorses this system rather than another one. That’s an empirical observation. This isn’t quite what most people want out of morality, but I consider that a feature, not a bug; morality the way most people want it is probably meaningless.
An even harder one: “If Lincoln had kept McClellan as general, the South would have won the US Civil War.” Maybe this describes the output of the logical formal system corresponding to the universe (!) with certain inputs empirically determined by the state of Civil War-era America. (I bet there’s a better way to rescue this one, but I can’t think of it.)
[EDIT: Now that I am done writing this I am looking at the Mixed Reference post above, and it does a pretty good job]
Finally, we come to logical positivism itself, which was famously accused by Karl Popper of failing its own criteria. Once again, everyone agrees it’s possible to define logical positivism. And by analogy with math and morals, now we just need to show that this system, as defined, matches some property we want it to have.
And obviously this is going to be sort of subjective, because “property we want it to have” depends on what properties we want of our ontologies. It might even end up circular, because a lot of the properties I want my ontologies to have probably sound a little like “Doesn’t talk about meaningless stuff, with meaningless being defined as [something that uses a lot of the same criteria as logical positivism]”.
But I think it might also correspond closely to that which can be debated. That is, what is the class of statements about which, using reason rather than emotion or made-up pseudologic, we can actually change our minds and correctly judge as having one probability of truth rather than another?
I welcome examples of this being proven totally false.