“Science can tell you about rocks and molecules and stars. But what kind of science can tell you about the deepest recesses of the human soul?”
I hear this a lot, and I want to answer “Psychology! It’s this whole science that totally exists and is all about that!” But then they would just change “deepest recesses of the human soul” to “how to be a good person”, or “whether life has meaning” or whatever.
Of course, there are sciences that bear on these questions. For example, biology can tell us a lot about the evolutionary origins of our moral intuitions, which sounds like the sort of thing that might be useful if you’re trying to figure out how to be a good person. But the overall claim that empiricism and experiment cannot single-handedly solve these problems for us seems to me to be correct.
“Scientism” is a purported fallacy in which people naively believe that science can solve everything. Wikipedia defines it as “belief in the universal applicability of the scientific method.” But for a problem that’s supposedly so common, it lacks a sort of at-all-believability.
I mean, this should be – pardon my scientism – an empirical question. Has someone done an experiment that has figured out how we should live our lives? Is there a grant proposal in the works for such an experiment? Does anyone seriously believe we may one day figure out the best way to live by splitting the hedon in a giant particle accelerator? No? Then who exactly are these so-called…wait, that doesn’t work…er…can we call them scientismists? Is that a word? No? Okay. But who are they?
When I hear people accused of scientism, they’re not trying to determine the moral law with particle accelerators. They’re trying to determine the moral law the same way their accusers are – thinking about it for a while, devising long jargon-filled arguments, then publishing articles in philosophy journals. They are doing nothing remotely resembling the scientific method. Nor do they especially connect with the results of science. Consequentialists are accused of scientism a lot, but there’s nothing in consequentialism incompatible with the planets being pushed around by angels, or thunder happening when the gods go bowling. Something else has to be going on here.
On some level it seems to be about personalities, what academic-y types call inter-departmental squabbling. The people making the accusations of scientism are culturally sophisticated types who read Cicero and Plato, who write in flowery prose, who speak fluent French. The people getting accused are geeky types who read Einstein or Feynman, who write in dense mathematical notation, who program in C. On some level, it expresses that oldest of human requests: “Aaaagh! Foreigners! Get off my turf!”
But I don’t think it’s just about turf battles. I think people are right to identify scientism as a thing. These two groups of people think differently. There are different processes going on in their minds. They will reach different results. Even if neither side suddenly breaks out a test tube, one side will be doing something fundamentally more scientific than the other.
And let me show my colors: I think one of them is doing something better. I myself am a scientismist. I think the impact of having people thinking scientifically in non-scientific fields is usually good.
Why should that be? If science is just about rocks and molecules and stars, why would scientific training and knowledge give you an advantage in unrelated fields?
From Shakespeare’s Cassius: “The fault, dear Brutus, is not in our stars / But in ourselves.”
I don’t think science should inform philosophy because of what it’s discovered about stars. It should inform philosophy because of what people in the process of investigating stars have incidentally discovered about the faults in themselves.
Imagine a prankster with superhuman skill in ophthalmological surgery manages to cut open and rearrange your eyes while you’re asleep. She gives your vision a sort of tilt-shift effect that makes everything appear smaller. And at the time, you happen to be on a World Tour.
Your friend asks you how Paris is, and you say: “It looks very small! It’s full of tiny people and a miniature Eiffel Tower!” Your friend corrects you and tells you Paris is actually normal sized.
Then you’re in London. You mention how it’s full of dwarves and a cute little clock tower the size of a sewing needle. Once again your friend corrects you and tells you London is normal size.
The next week you’re in Beijing. You’re tempted to dismiss it as a city of midgets and of medium-sized portraits of Mao. But by now you’ve wised up. Your experiences in Paris and London have taught you that there’s something wrong with your vision and you had better be more careful.
A detractor might say “What can learning about Paris and London possibly teach you about Beijing? It’s on a totally different continent and steeped in a totally different culture. Lessons learned in Europe just don’t transfer!” But as long as you’re using the same faulty vision to view each city, the lessons learned do transfer. Even if facts about China are completely uncorrelated with any facts about Europe, your errors about both will be correlated because it’s the same person erring each time.
For all that it stresses empiricism, science isn’t just about experiment and observation. It’s also got a theoretical side. The interesting part of science is that it’s a calibration process. You use your theorizing faculties, and then you perform experiments to see if you were right or wrong.
Just as a biologist-engaged-in-experiment is testing different drugs to see whether they cure disease, a biologist-engaged-in-theory is (usually unintentionally) testing different mental algorithms to see whether they correctly predict which drugs will cure disease, or can generate disease cures.
And just as experimental science may discover that the witch doctor’s technique of drilling a hole in the skull to let out the evil demons is not in fact best practice, so theoretical science may discover that certain reasoning techniques don’t stand up to scrutiny either.
One of these which has been downright mythologized is the story of How We Learned That Things Aren’t Usually Caused By Sentient Agents. Back in the old days rain was caused by the Rain God and disease was caused by the Disease Demon, but then we discovered that these were actually natural processes and not people at all, and (so the myth continues) One Day We Will Finally Complete The Process By Ceasing To Believe In God.
The only problem with this narrative is that as far as I know we stopped believing in the Rain God and the Disease Demon long before we had any good experimental science or even any naturalistic alternative explanations for rain or disease. I’m not sure why this is, but it makes it less than a perfect victory for Science.
Still, some very similar stories are genuine victories. The Copernican Principle, for one, where we gradually lost belief in our own uniqueness and went from “Earth holds a privileged position” to “The solar system holds a privileged position” to “The galaxy holds a privileged position” to our current and obviously-correct “Okay, there’s a whole universe out there, but it definitely has a privileged position and doesn’t split up into lots of different equally real quantum branches”.
There are other principles without equally catchy names. The “No, You Can’t Just Treat Human-Level Interesting Categories As Ontologically Real Primitives” principle, which I suppose one could call the Huxleyan Principle after the biologist who worked the hardest to discredit elan vital. The “Stop Using Value-Based Explanations” principle, which can be used with equal aplomb against everyone from the old Great Chain of Being theorists to high school biology students who insist that evolution is progress from “worse” to “better” organisms.
(life hack: Does saying “worse” and “better” make you feel unscientific? Just replace these words with “less complex” and “more complex”, then pretend these terms have objective meanings!)
Each of these principles works not because of the particular field it is applied to, but because it compensates for a defect in our own reasoning faculty. Our brain evolved mostly to think about other humans, thinking about other humans is the first thing it wants to do in any situation, so we end up with a bias towards anthropomorphism. We are clearly very important to ourselves, so we project this onto the universe and think we (or our planet, or our star, or our galaxy) must be at the center.
Therefore, the correct application of these principles is an antiprediction, a sort of easily defensible sticking to a default position. For example, “the world will probably not end on January 18, 2020” is an antiprediction, because we have no reason to think that it should. It is very difficult to predict the future, but this is no argument against my claim that the world will probably not end on January 18, 2020. No one gets to shake their head and say “That’s kind of arrogant of you to think that you can know that.”
Antipredictions do not always sound like antipredictions. Consider the claim “once we start traveling the stars, I am 99% sure that the first alien civilization we meet will not be our technological equals”. This sounds rather bold – how should I know to two decimal places about aliens, never having met any?
But human civilization has existed for 10,000 years, and may go on for much longer. If “technological equals” are people within about 50 years of our tech level either way, then all I’m claiming is that out of 10,000 years of alien civilization, we won’t hit the 100 where they are about equivalent to us. 99% is the exact right probability to use there, so this is an antiprediction and requires no special knowledge about aliens to make.
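The arithmetic behind that 99% can be made explicit. As a minimal sketch (the uniform 10,000-year spread and the ±50-year definition of “technological equals” are the assumptions stated above):

```python
# Sketch of the antiprediction arithmetic, assuming an alien civilization's
# age relative to ours is spread uniformly over a 10,000-year span, and
# "technological equals" means within 50 years of us either way.
span_years = 10_000      # assumed spread of possible civilization ages
window_years = 50 + 50   # the "equals" window: 50 years behind or ahead

p_equal = window_years / span_years   # chance we land in the window
p_not_equal = 1 - p_equal             # the antiprediction

print(p_equal)      # 0.01
print(p_not_equal)  # 0.99
```

The 99% figure is just the complement of the window’s share of the span; no knowledge about aliens enters anywhere.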
The antipredictive nature is surprising because certain possibilities stand out more clearly to us. Aliens being around our tech level is narratively interesting – we can fight wars on an equal footing or engage in mutually profitable trade. Certainly the idea of meeting aliens who have been stuck at the tech level of Assyria for a thousand years is less available.
We can say that the hypothesis-space is distorted: equal-tech aliens look like they take up a very large area, even though they are tiny.
In the same way, certain salient regions of hypothesis space that correspond to natural human thought processes falsely appear very large, and certain other regions that don’t correspond to natural thought processes falsely appear much smaller.
If you’ve calibrated yourself on previous problems, then “The ground of being has to be a person” should bring up alerts like “Wait a second, it also seemed like rain had to be a person”.
And “I bet moral value is this objectively real conceptual primitive of perfect simplicity” should bring up alerts like “Wait a second, it also seemed like life had to be an objectively real conceptual primitive of perfect simplicity, and it ended up being this.”
This should work the same way that the observation “Beijing seems full of tiny little Chinese midgets” brought up the alert “Wait a second, Paris also seemed full of tiny little French midgets”.
Beyond these specific problems like the Copernican Principle lies a greater problem which makes all others pale into insignificance.
People who haven’t calibrated their theorizing against hard reality still think verbal reasoning works.
There have been a couple hundred proofs of the existence of God thought up throughout the centuries. And more recently, there have also been a couple hundred proofs of the nonexistence of God thought up. Clearly, a couple hundred proofs of something doesn’t make it so.
“But no one ever said something must be true just because someone has published a proof! The proof must be correct! The proofs of the existence/nonexistence of God are just wrong!”
Well, yes. Of course. But which side’s proofs you think are wrong tend to have a very very very strong correlation with which side you personally subscribe to.
Our faculty for evaluating chains of deductive reasoning similar to proofs of the (non)existence of God, or a lot of what goes on in philosophy, or, God help us, politics, is – pardon my language – really shitty. And we never realize this, because it is selectively shitty. It tells us it has logically evaluated arguments, and determined our opponents’ arguments are wrong, and our own arguments are right. And this is nice and consistent and convenient so we assume it must know what it’s doing. If it gets proven wrong once or twice or sixty times, we can dismiss that as a fluke, or an edge case, or It’s Beside The Point, or The Real Question Is Whether You Are Racist For Even Bringing That Up.
The thing I notice about scientists who branch out into other fields and get accused of scientism is that they tend to be minimalists. They’re always the ones saying there isn’t something. There probably isn’t a god. There probably isn’t Cosmic Consciousness. There probably isn’t any particular moral law beyond your actions just having effects in the world.
And their opponents believe this is because they fetishize Science as the only thing that can possibly be real. You can see bacteria under a microscope, you can see atoms under a microscope, but you can’t see God under a microscope, and therefore if they don’t believe in God it’s because they have obstinately decided only to believe in Science-y things.
But in fact, it is exactly the reverse. These skilled wielders of rejection first trained themselves on Science-y things. Lamarckian evolution. Steady State theory. The planet Vulcan. The four humors. The blank slate. Radical behaviorism. Catastrophism. Recapitulation theory. The luminiferous aether.
By holding scientific theories, which can be and are disproven, they trained themselves in Doubt. And that Doubt continues to serve them when they branch into other areas where theories cannot be disproven so easily. And maybe they will be less easily swayed by attractive verbal arguments.
The people who get accused of scientism are not all themselves scientists, and even those who are may never have suffered a mistake equal in enormity to believing in luminiferous aether. But they’re steeped in the culture. They’ve absorbed the mores. Even if they have no scientific virtue themselves and are merely aping the motions of their betters, those motions themselves contain certain safeguards against some of the most atrocious errors.
I don’t believe such scientifically informed people, when branching off into other fields, will always or even often be right. But I think they have a better chance than people working from intellectual traditions that have never gotten to calibrate their thought processes in the same way.
And that is why I consider myself a scientismist. I know it is supposed to be a pejorative, but I am reclaiming it. And I know it has many definitions, but this one is mine:
A view of hypothesis-space that accounts for human fallibilities, as revealed by past experiences.
And a very, very high burden of proof before zeroing in on any one area of that space.