
I Myself Am A Scientismist

I.

“Science can tell you about rocks and molecules and stars. But what kind of science can tell you about the deepest recesses of the human soul?”

I hear this a lot, and I want to answer “Psychology! It’s this whole science that totally exists and is all about that!” But then they would just change “deepest recesses of the human soul” to “how to be a good person”, or “whether life has meaning” or whatever.

Of course, there are sciences that bear on these questions. For example, biology can tell us a lot about the evolutionary origins of our moral intuitions, which sounds like the sort of thing that might be useful if you’re trying to figure out how to be a good person. But the overall claim that empiricism and experiment cannot single-handedly solve these problems for us seems to me to be correct.

“Scientism” is a purported fallacy in which people naively believe that science can solve everything. Wikipedia defines it as “belief in the universal applicability of the scientific method.” But for a problem that’s supposedly so common, it lacks a sort of at-all-believability.

I mean, this should be – pardon my scientism – an empirical question. Has someone done an experiment that has figured out how we should live our lives? Is there a grant proposal in the works for such an experiment? Does anyone seriously believe we may one day figure out the best way to live by splitting the hedon in a giant particle accelerator? No? Then who exactly are these so-called…wait, that doesn’t work…er…can we call them scientismists? Is that a word? No? Okay. But who are they?

When I hear people accused of scientism, they’re not trying to determine the moral law with particle accelerators. They’re trying to determine the moral law the same way their accusers are – thinking about it for a while, devising long jargon-filled arguments, then publishing articles in philosophy journals. They are doing nothing remotely resembling the scientific method. Nor do they especially connect with the results of science. Consequentialists are accused of scientism a lot, but there’s nothing in consequentialism incompatible with the planets being pushed around by angels, or thunder happening when the gods go bowling. Something else has to be going on here.

On some level it seems to be about personalities, what academic-y types call inter-departmental squabbling. The people making the accusations of scientism are culturally sophisticated types who read Cicero and Plato, who write in flowery prose, who speak fluent French. The people getting accused are geeky types who read Einstein or Feynman, who write in dense mathematical notation, who program in C. On some level, it expresses that oldest of human requests: “Aaaagh! Foreigners! Get off my turf!”

But I don’t think it’s just about turf battles. I think people are right to identify scientism as a thing. These two groups of people think differently. There are different processes going on in their minds. They will reach different results. Even if neither side suddenly breaks out a test tube, one side will be doing something fundamentally more scientific than the other.

And let me show my colors: I think one of them is doing something better. I myself am a scientismist. I think the impact of having people thinking scientifically in non-scientific fields is usually good.

II.

Why should that be? If science is just about rocks and molecules and stars, why would scientific training and knowledge give you an advantage in unrelated fields?

From Shakespeare’s Cassius: “The fault, dear Brutus, is not in our stars / But in ourselves.”

I don’t think science should inform philosophy because of what it’s discovered about stars. It should inform philosophy because of what people in the process of investigating stars have incidentally discovered about the faults in themselves.

Imagine that a prankster with superhuman skill in ophthalmological surgery cuts open and rearranges your eyes while you’re asleep. She gives your vision a sort of tilt-shift effect that makes everything appear smaller. And at the time, you happen to be on a World Tour.

Your friend asks you how Paris is, and you say: “It looks very small! It’s full of tiny people and a miniature Eiffel Tower!” Your friend corrects you and tells you Paris is actually normal sized.

Then you’re in London. You mention how it’s full of dwarves and a cute little clock tower the size of a sewing needle. Once again your friend corrects you and tells you London is normal size.

The next week you’re in Beijing. You’re tempted to dismiss it as a city of midgets and of medium-sized portraits of Mao. But by now you’ve wised up. Your experiences in Paris and London have taught you that there’s something wrong with your vision and you had better be more careful.

A detractor might say “What can learning about Paris and London possibly teach you about Beijing? It’s on a totally different continent and steeped in a totally different culture. Lessons learned in Europe just don’t transfer!” But as long as you’re using the same faulty vision to view each city, the lessons learned do transfer. Even if facts about China are completely uncorrelated with any facts about Europe, your errors about both will be correlated because it’s the same person erring each time.

III.

For all that it stresses empiricism, science isn’t just about experiment and observation. It’s also got a theoretical side. The interesting part of science is that it’s a calibration process. You use your theorizing faculties, and then you perform experiments to see if you were right or wrong.

Just as a biologist-engaged-in-experiment is testing different drugs to see whether they cure disease, a biologist-engaged-in-theory is (usually unintentionally) testing different mental algorithms to see whether they correctly predict which drugs will cure disease, or can generate disease cures.

And just as experimental science may discover that the witch doctor’s technique of drilling a hole in the skull to let out the evil demons is not in fact best practice, so theoretical science may discover that certain reasoning techniques don’t stand up to scrutiny either.

One of these which has been downright mythologized is the story of How We Learned That Things Aren’t Usually Caused By Sentient Agents. Back in the old days rain was caused by the Rain God and disease was caused by the Disease Demon, but then we discovered that these were actually natural processes and not people at all, and (so the myth continues) One Day We Will Finally Complete The Process By Ceasing To Believe In God.

The only problem with this narrative is that as far as I know we stopped believing in the Rain God and the Disease Demon long before we had any good experimental science or even any naturalistic alternative explanations for rain or disease. I’m not sure why this is, but it makes it less than a perfect victory for Science.

Still, some very similar stories are genuine victories. The Copernican Principle, for one, where we gradually lost belief in our own uniqueness and went from “Earth holds a privileged position” to “The solar system holds a privileged position” to “The galaxy holds a privileged position” to our current and obviously-correct “Okay, there’s a whole universe out there, but it definitely has a privileged position and doesn’t split up into lots of different equally real quantum branches”.

There are other principles without equally catchy names. The “No, You Can’t Just Treat Human-Level Interesting Categories As Ontologically Real Primitives” principle, which I suppose one could call the Huxleyan Principle after the biologist who worked the hardest to discredit elan vital. The “Stop Using Value-Based Explanations” principle, which can be used with equal aplomb against everyone from the old Great Chain of Being theorists to high school biology students who insist that evolution is progress from “worse” to “better” organisms.

(life hack: Does saying “worse” and “better” make you feel unscientific? Just replace these words with “less complex” and “more complex”, then pretend these terms have objective meanings!)

IV.

Each of these principles works not because of the particular field it is applied to, but because it compensates for a defect in our own reasoning faculty. Our brain evolved mostly to think about other humans, and thinking about other humans is the first thing it wants to do in any situation, so we end up with a bias towards anthropomorphism. We are clearly very important to ourselves, so we project this onto the universe and think we (or our planet, or our star, or our galaxy) must be at the center.

Therefore, the correct application of these principles is an antiprediction, a sort of easily defensible sticking to a default position. For example, “the world will probably not end on January 18, 2020” is an antiprediction, because we have no reason to think that it should. It is very difficult to predict the future, but this is no argument against my claim that the world will probably not end on January 18, 2020. No one gets to shake their head and say “That’s kind of arrogant of you to think that you can know that.”

Antipredictions do not always sound like antipredictions. Consider the claim “once we start traveling the stars, I am 99% sure that the first alien civilization we meet will not be our technological equals”. This sounds rather bold – how should I know to two decimal places about aliens, never having met any?

But human civilization has existed for 10,000 years, and may go on for much longer. If “technological equals” are people within about 50 years of our tech level either way, then all I’m claiming is that out of 10,000 years of alien civilization, we won’t hit the 100 where they are about equivalent to us. 99% is the exact right probability to use there, so this is an antiprediction and requires no special knowledge about aliens to make.
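The arithmetic behind this antiprediction is simple enough to sketch. The 10,000-year lifespan and the ±50-year window are the essay’s own illustrative numbers, not real data about aliens:

```python
# Antiprediction arithmetic from the alien-civilization example.
# Assumption (from the text): a civilization's history spans about
# 10,000 years, and "technological equals" means within +/-50 years
# of our level, i.e. a 100-year window.
span_years = 10_000
window_years = 100  # +/-50 years around our tech level

# If first contact lands at a random moment of their history, the
# chance it falls inside the equal-tech window:
p_equal = window_years / span_years
p_not_equal = 1 - p_equal

print(p_equal)      # 0.01
print(p_not_equal)  # 0.99
```

The 99% figure is just the complement of a uniform-over-time guess, which is why no special knowledge about aliens is required to assert it.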

The antipredictive nature is surprising because certain possibilities stand out more clearly to us. Aliens being around our tech level is narratively interesting – we can fight wars on an equal footing or engage in mutually profitable trade. Certainly the idea of meeting aliens who have been stuck at the tech level of Assyria for a thousand years is less available.

We can say that the hypothesis-space is distorted: the equal-tech-aliens region looks like it takes up a very large area, even though it is tiny.

In the same way, certain salient regions of hypothesis space that correspond to natural human thought processes falsely appear very large, and certain other regions that don’t correspond to natural thought processes falsely appear much smaller.

If you’ve calibrated yourself on previous problems, then “The ground of being has to be a person” should bring up alerts like “Wait a second, it also seemed like rain had to be a person”.

And “I bet moral value is this objectively real conceptual primitive of perfect simplicity” should bring up alerts like “Wait a second, it also seemed like life had to be an objectively real conceptual primitive of perfect simplicity, and it ended up being this.”

This should work the same way that the observation “Beijing seems full of tiny little Chinese midgets” brought up the alert “Wait a second, Paris also seemed full of tiny little French midgets”.

V.

Beyond these specific problems like the Copernican Principle lies a greater problem which makes all others pale into insignificance.

People who haven’t calibrated their theorizing against hard reality still think verbal reasoning works.

There have been a couple hundred proofs of the existence of God thought up throughout the centuries. And more recently, there have also been a couple hundred proofs of the nonexistence of God thought up. Clearly, a couple hundred proofs of something doesn’t make it so.

“But no one ever said something must be true just because someone has published a proof! The proof must be correct! The proofs of the existence/nonexistence of God are just wrong!”

Well, yes. Of course. But which side’s proofs you think are wrong tend to have a very very very strong correlation with which side you personally subscribe to.

Our faculty for evaluating chains of deductive reasoning similar to proofs of the (non)existence of God, or a lot of what goes on in philosophy, or god help us politics, is – pardon my language – really shitty. And we never realize this, because it is selectively shitty. It tells us it has logically evaluated arguments, and determined our opponents’ arguments are wrong, and our own arguments are right. And this is nice and consistent and convenient so we assume it must know what it’s doing. If it gets proven wrong once or twice or sixty times, we can dismiss that as a fluke, or an edge case, or It’s Beside The Point, or The Real Question Is Whether You Are Racist For Even Bringing That Up.

The thing I notice about scientists who branch out into other fields and get accused of scientism is that they tend to be minimalists. They’re always the ones saying there isn’t something. There probably isn’t a god. There probably isn’t Cosmic Consciousness. There probably isn’t any particular moral law beyond your actions just having effects in the world.

And their opponents believe this is because they fetishize Science as the only thing that can possibly be real. You can see bacteria under a microscope, you can see atoms under a microscope, but you can’t see God under a microscope, and therefore if they don’t believe in God it’s because they have obstinately decided only to believe in Science-y things.

But in fact, it is exactly the reverse. These skilled wielders of rejection first trained themselves on Science-y things. Lamarckian evolution. Steady State theory. The planet Vulcan. The four humors. The blank slate. Radical behaviorism. Catastrophism. Recapitulation theory. The luminiferous aether.

By holding scientific theories, which can be and are disproven, they trained themselves in Doubt. And that Doubt continues to serve them when they branch into other areas where theories cannot be disproven so easily. And maybe they will be less easily swayed by attractive verbal arguments.

The people who get accused of scientism are not all themselves scientists, and even those who are may never have suffered a mistake equal in enormity to believing in luminiferous aether. But they’re steeped in the culture. They’ve absorbed the mores. Even if they have no scientific virtue themselves and are merely aping the motions of their betters, those motions themselves contain certain safeguards against some of the most atrocious errors.

I don’t believe such scientifically informed people, when branching off into other fields, will always or even often be right. But I think they have a better chance than people working from intellectual traditions that have never gotten to calibrate their thought processes in the same way.

And that is why I consider myself a scientismist. I know it is supposed to be a pejorative, but I am reclaiming it. And I know it has many definitions, but this one is mine:

A view of hypothesis-space that accounts for human fallibilities, as revealed by past experiences.

And a very, very high burden of proof before zeroing in on any one area of that space.


44 Responses to I Myself Am A Scientismist

  1. Anonymous says:

    You should read Sam Harris and meet some of the millions of people who have already done so. For a bonus, here’s a link to a shrine to him:

    http://www.reddit.com/r/badphilosophy

    • Carinthium says:

      I honestly can’t tell- is that site satirical or not?

      • Berry says:

        Poe’s Law might apply?

      • Anonymous says:

        The top post of all time is “So guys and girls, who wants to see all the crazy posts that are picked up by the mods of /r/philosophy before you get a chance to rip them apart?”.

        So it’s pretty explicitly about making fun of bad philosophy.

  2. Benito says:

    In Keith Stanovich’s book “How to Think Straight About Psychology”, he defines ‘Science’ and ‘Scientific Knowledge’ (in Chapter 1) by three definitions:

    1) Systematic Empiricism
    2) The Production of Public Knowledge
    3) The Examination of Solvable Problems

    (This means setting up experiments to observe which state reality is in (done so as to differentiate between different theories), only working with ideas that are not forever private (replicable, peer reviewed) and making sure the theories make statements about how the world is and is not, so that they are falsifiable. They must also be in published papers)

    This is actually very specific, and very narrow. For example, this draws a strong distinction between science and philosophy, because science is all about what the different theories predict, experimenting, and publishing them to an acceptable standard.

    I wonder if some people (Sam Harris, what a lovely human being, included) have gained a little affective death spiral around ‘science’. Like the Marxist who sees the truth of his theory (now a dogma) in every shop he enters, the philosopher/atheist/public speaker who sees the power of science in every question – surely science can tell us everything there is to know!

    For the most part, moral theory currently lies in the domain of philosophy. Saying that ‘philosophy is just science’ is forgetting the virtue of narrowness. I do think that we will have advanced ethically when morality becomes the domain of science, because we will understand it clearly enough for all problems to be empirically solvable, but right now it isn’t. The rationality laws discovered in this endeavour will be useful elsewhere, but let us not say ‘everything is connected – to my favourite idea!’.

    Good reasoning in general can fall under the term ‘rationality’. Science is not philosophy, is not history, and is not engineering.

    I’m not particularly confident in what I have just said. Perhaps the word refers to something now far more than the previous definition, and this should be embraced as a worldview. I was thinking of writing it up as a LW discussion post – for further discussion.

  3. Carinthium says:

    Besides the well-made Sam Harris point, I might point out a few things:

    1: The question of “why trust the senses”, “why trust memory” etc cannot be solved by science without the use of circular arguments. In addition, the rules of logic affect scientific discourse but aren’t entirely scientific. Finally, of course, there is philosophy of science.
    2: The question of Foundationalism vs. Coherentism vs. Infinitism vs. Global Skepticism vs. various minor variants/mixes of these is one that cannot be contributed to greatly by science. This is because any attempt to apply empirical evidence to the question is itself a circular argument.
    3: There are many possible ethics compatible with all known scientific data. Other than the sort of utilitarianism you are likely very familiar with, another possibility would be a scientific “racist” who says “It is natural for human beings to value those close to them or more similar to them than those less similar, so I do despite knowing their similarity on many levels”, psychopathy, a moral guiltist who says “I act in a pseudo-moral manner to appease my conscience but try to minimise its impact and serve my self-interest, so many seemingly inconsistent actions aren’t”, etc.
    4: Even if you accept a relatively standard ethics, the question still arises of what the measure of value is. The standard measure of calling all humans equal is highly simplistic and open to easy criticism for being arbitrary unless imposed as a starting point- but is it something to do with being more or less conscious, with being more or less intelligent, or something else?

    I will emphasise the point- your views on Sam Harris are quite interesting and relevant to the discussion. Sam Harris makes the mistake of assuming that utilitarianism can be established without actually discussing the matter.

    • Ray says:

      The circularity critique cuts both ways. You can’t demonstrate anything in philosophy without taking some premises as starting assumptions. Those premises cannot be philosophically justified without circular argument either.

      • Carinthium says:

        That’s not quite true. Excluding If X then Y statements, philosophy is capable of rejecting sets of reasoning as unsound without resort to any assumptions by demonstrating ways in which said reasoning is false.

        Besides, it is irrational to believe anything that cannot be justified without resort to base assumptions- to the extent philosophy does rely on assumptions, the only course of action that escapes accusations of being arbitrary and faith-based is to reject said parts of philosophy along with science.

        In terms of possible starting assumptions as necessary, there aren’t as many as you might think. Logic can be justified on the basis of the law of non-contradiction and the fact that, like a universe with moral facts that ‘just are’, theories that are logically self-contradictory are so internally incoherent no possible universe could exist with them. A rational philosopher would likely discard ethics and simply say that people want things, what to want is inherently arbitrary, and empirical reasoning should be used to determine how to achieve one’s desires if effectiveness is a goal.

        The only real problem of assumptions in philosophy is the assumption that memory is reliable. This is a major limiter on philosophy, but in the way already explained is not a total limiter because if you wish to establish that something COULD be untrue, it is a lot easier.

        For example, a Coherentist could, if simplifying Coherentism, argue that because reality is self-consistent we ought to accept it as real. A philosopher could point out that it is easily possible for something to be entirely Coherent and yet false as a counter to this.

        • Ray says:

          “Logic can be justified on the basis of the law of non-contradiction”

          Exactly my point. How do you justify the law of non-contradiction non-circularly? You give an attempt:

          “theories that are logically self-contradictory are so internally incoherent no possible universe could exist with them.”

          but how do you know that’s true other than by way of experience trying to formulate such theories? Not the best justification, if what you’re trying to justify is empiricism.

          I am also dubious that there are as few assumptions in philosophy as you think there are. Formal logic (at least of the sort that’s powerful enough to talk about the integers) has many more axioms than just the law of non-contradiction.

          Philosophy doesn’t use formal logic so much as natural language, but that doesn’t simplify matters. Yeah you can sort of sum up all the assumptions as “words mean what I think they mean,” but as computer programmers trying to do natural language processing have discovered, unpacking that sentence is not a simple matter.

      • Carinthium says:

        In addition, come to think of it, your argument doesn’t address the question of a Global Skeptical view which says “We are not justified in believing anything except our own lack of justification.” Rejecting such a view for no reason is clearly silly, and given the strong arguments for the view it has a strong case. If it is to be treated as a serious contender, however, any view which makes assumptions loses out to Global Skepticism because Global Skepticism doesn’t.

        • Ray says:

          “We are not justified in believing anything except our own lack of justification.” sounds an awful lot like an unjustified assumption to me. (Well, strictly speaking, it’s circularly justified, but whatever. You were the one asserting that that was a problem.)

          My point is simply that both philosophy and science are in the business of reasoning from assumptions. Neither the philosopher nor the scientist is in the business of reasoning from no assumptions whatsoever. I see no evidence that the assumptions philosophers use are any more reasonable than those scientists use.

        • Carinthium says:

          A- I am not referring to empirical evidence, but to a theoretical exercise of trying to imagine such a universe and figure out how it works. By what definition is this empirical, given that it doesn’t rely on sensory data but concepts in the mind for whatever reason?
          B- I don’t know enough about formal logic, but those axioms which cannot be justified by the law of non-contradiction should be rejected unless they are merely definitions of terms. The rest of it really is unjustifiable.
          C- Words don’t have objective meanings- to say that they do is on a tenability level roughly on par with free will. When using the word, you are really referring to the word’s meaning. No assumption required. Yes there are problems through unpacking, but not ones that affect the relevant philosophical processes.

          As for the remaining methods of justification, some models (such as religious faith and a coherentism that extends to ethics) lead to contradictions. With the others, I may have misspoken slightly (though I’m still not sure)- perhaps it would be more accurate that it would be as irrational as faith in God to call them justified, and irrational in the same way, due to the lack of real reasoning to demonstrate justifiability.

          Finally, of course, there is the question of how you distinguish Science from blind faith if you consider both to rely on axioms. How are you supposed to consider Science superior to a hypothetical religious faith that is self-consistent? (Some have problems with self-consistency, but the challenge is still great enough as there are many entirely self-consistent models of the world that differ greatly from the scientific one)

    • Benito says:

      (I assume you were talking to me)

      Your points contain good examples of philosophy that is not science, but I don’t want to understate the value that knowing science can bring to philosophy (as well as history and many other fields).

      e.g. Science contains the knowledge of how value works in the brain, which I think is a massive hint for ethicists.

      I’m just trying to note a possible death spiral, and apply the necessary counter-measures.

      P.S. I used Sam Harris as an example, I think that he is one of many. I suppose I target him with my thoughts, because I like what he says and writes so much that I’m trying to compensate (http://lesswrong.com/lw/31i/have_no_heroes_and_no_villains/)

      • Carinthium says:

        I’ll discuss more examples if you have them, but you shouldn’t overestimate the value of scientific knowledge for philosophy. It matters only to those ethicists who think that human nature is relevant to ethical questions- this may sound like common sense, but it is an assumption that is pretty hard to justify without the use of pragmatic reasoning (which, in a sound philosophical system, itself needs justification).

        Some people, such as Eliezer Yudkowsky, try to get around the problem by redefining “good” as, if I remember correctly, relating to the welfare of human beings. The problem is that this presumes the importance of conscious organisms, various ideas of welfare etc which he hasn’t justified. To an extent his view has to say “You should be as utilitarian and thus selfless as possible, rather than as selfish as possible, just because.”

        • von Kalifornen says:

          With TDT, selfishness and selflessness become identical.

        • Carinthium says:

          Von Kalifornen, can you clarify your reasoning here? I admit I haven’t read enough of the relevant stuff, but if I remember correctly this is because both of them are wants and, from a metaethical perspective, equivalent. But as far as I can see it in practice:
          -Take the hypothetical where a person has two courses of action. One benefits them overall and is bad from a utilitarian perspective; the other makes them worse off overall but is good from a utilitarian perspective. This is factoring for guilt and future consequences of such selfishness, which of course exist. Eliezer’s theory, if I understand it right, has no way to choose between them. Despite this, Eliezer’s ethical writings never seem to treat this as a question worth discussing.

          I know about “You can’t argue with a rock”, but the difference is that real people do sometimes come into dilemmas which are basically “Should I be selfless or selfish?” A practical guide to human behaviour should thus address such situations.

  4. Alexander D says:

    The attempt has been made, in the past, to suggest that the peculiar squint of Doubt wielded by science cannot be brought to bear on metaphysical questions – this was most famously conceptualized as “non-overlapping magisteria” by Stephen Jay Gould (of Panda’s Thumb and punctuated equilibrium fame).

    http://rationalwiki.org/wiki/Non-Overlapping_Magisteria

    Incidental note: In the beginning of IV, I think you mean “anthropocentrism,” not “anthropomorphism.”

  5. Oligopsony says:

    It seems to me that “scientism”/”positivism” (even though positivism is a fairly specific thing) and “woo”/”postmodern” (even though postmodernism is a fairly specific thing) are more or less – and such things are very common – a Carlinesque maniac/idiot dichotomy. So the question is not which side is right, but where on the spectrum is right (for particular questions, and in particular ways, &c.) Obviously you don’t totally discount humanistic reasoning because it’s precisely what you’re employing in this post.

    (It also strikes me that “humanistic reasoning” is probably an overbroad category. I actually can’t tell if you’re targeting the pomo types or the Aristotelians, so I suspect you may be employing some outgroup homogeneity bias.)

  6. Douglas Knight says:

    I despise the word “scientismist.” How about “scientismian”?

    For the use of “scientism” with which you begin the essay, I think a clearer term would be “science triumphalism.” But at the end you seem to switch to “scientist triumphalism,” which is too far from any use to see it as a good faith effort at communication.

    I always use “scientism” to mean pretending to be scientific.

    • Michael Vassar says:

      I think that’s the traditional use of the word.

      • Douglas Knight says:

        It may be the traditional use, but I think it has lost and I’m going to stop using it.

        The traditional use is to accept Scott’s claim “Even if they have no scientific virtue themselves and are merely aping the motions of their betters, those motions themselves contain certain safeguards against some of the most atrocious errors,” but to reject the conclusion that these safeguards are always enough to be better than non-scientific competitors. And certainly that we should pay attention and prefer people who are actually doing science to those going through the motions.

    • St. Rev says:

      I use the term ‘science fandom’, but not in quite the same context.

    • deathpigeon says:

      I solve the scientismist problem by calling it sciencism and people who are like that sciencists.

  7. Crimson Wool says:

    “But human civilization has existed for 10,000 years, and may go on for much longer. If “technological equals” are people within about 50 years of our tech level either way, then all I’m claiming is that out of 10,000 years of alien civilization, we won’t hit the 100 where they are about equivalent to us. 99% is the exact right probability to use there, so this is an antiprediction and requires no special knowledge about aliens to make.”

    Wrong. Interstellar civilizations equivalent to our own in their first stages, while they make up a small temporal footprint, have a very large spatial footprint. For example, suppose that, on average, an interstellar civilization colonizes 98 planets before meeting another civilization. Ignoring those tasteless superadvanced aliens who muck up these calculations (as you have done as well), we have a 50-50 shot of meeting such a civilization, since even though it comprises only 1% of the total temporal space, it controls a disproportionate amount of territory. In order to have any kind of idea about probabilities, we need to have an idea of the spatial footprint of interstellar civilizations throughout their various stages.
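    To make that weighting explicit, here is a minimal sketch. The stage durations and planet counts are purely illustrative assumptions, and the answer swings entirely on the footprint figures you plug in:

    ```python
    def encounter_shares(stages):
        # stages: list of (name, fraction_of_lifetime, avg_planets_held).
        # Crude assumption: the chance that first contact falls in a given
        # stage is proportional to time spent in that stage times the
        # territory held during it.
        weights = {name: t * planets for name, t, planets in stages}
        total = sum(weights.values())
        return {name: w / total for name, w in weights.items()}

    # Illustrative numbers only: a tech-equal window of 1% of a lifetime.
    shares = encounter_shares([
        ("tech-equal", 0.01, 2),   # brief window, little territory
        ("mature",     0.99, 98),  # rest of the lifetime, 98 planets held
    ])
    ```

    With these particular made-up numbers, the spatial weighting actually makes meeting equals even *less* likely than the naive temporal 1%; with the footprints reversed it would go the other way. Which is the point: you cannot get the probability without footprint estimates.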

    • Randy M says:

      Seems to me it all depends on the relative difficulties of interstellar travel vs. biogenesis. If life can arise (and evolve) relatively easily but interstellar travel is a hard problem, we’ll find more young civilizations; if vice versa, we’ll probably instead find many colonies of one old civilization.

      We can draw some assumptions based on ourselves about the latter, but fairly little about the former, afaik.

  8. Sarah says:

    The most common (and most credible) criticism of trying to make a subject sciencey when it ain’t is basically a claim of overfitting.

    The Austrian economists say “Hey, you put all these mathematical models on economics but the economy doesn’t actually follow those models! Why don’t you scale back some of the math and just use common sense?” [For “common sense” read “our theories,” which I do not necessarily endorse.] The mainstream economists say “You guys are a bunch of dummies. How could you not be? Your papers don’t have any math in them!”

    When we’re *actually* in a state of ignorance (as with many areas in the social and biological sciences) it can be counterproductive to pretty up the field to make it look sciencey, because this obscures how little we know. Some things aren’t *ready* for the scientific method; they’d benefit from a little natural history. Just plain observation and guessing.

    • Randy M says:

      “Some things aren’t *ready* for the scientific method; they’d benefit from a little natural history.”

      Or, some fields are prone to confusing *hypothesis* with *conclusion*. A model that fits all the known evidence is a hypothesis waiting to be disproven, not a proven conclusion already.
      Eventually the theory may be found to be very strong based on its predictive efficacy, but it cannot be said to be so based on the past data used to build it.
      Is it too early to say climate scientists have been guilty of this of late? Or that those who seek to convince us of the necessity of action have promoted a message more certain than the situation warrants?

      • Ray says:

        If the problem is overfitting, the solution is to use a simpler mathematical model, not no mathematical model at all. Non-mathematical reasoning is actually MORE complex than mathematical reasoning (if you doubt this, compare natural language processing software to proof verification software). Of course, if you have reason to believe the system you’re modeling is complex, you can’t take your simple model literally, but you can still be confident that whatever regularities you uncover this way are real, if not exact.

        The case where you would actually use non-mathematical reasoning is not when you’re at risk of overfitting, but when you’re at risk of throwing away useful implicit knowledge gained by cultural and biological inheritance. (Mostly this concerns picking up on the subtleties of human behavior, especially natural language, but also things like having a sense for how likely various historical sources are to be lying, knowing what economic hypotheses are plausible given how humans act individually, etc.) One example that immediately comes to mind here is the Bayesian approach to history — I think historians are MUCH better at figuring out how likely something is to be true in context, by way of fuzzy heuristic reasoning, than they are at stripping away that context and coming up with a prior probability.
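        The “simpler model, not no model” point can be seen on toy data. A minimal sketch, with everything below illustrative: fit a straight line and a degree-9 polynomial to noisy but truly linear data, then compare error on held-out points.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 20)
        y = 2 * x + rng.normal(scale=0.2, size=x.size)  # noisy, truly linear data

        # Interleaved train/test split: 10 points each.
        x_train, y_train = x[::2], y[::2]
        x_test, y_test = x[1::2], y[1::2]

        def holdout_mse(degree):
            # Fit a polynomial of the given degree on the training half,
            # then measure mean squared error on the held-out half.
            coeffs = np.polyfit(x_train, y_train, degree)
            pred = np.polyval(coeffs, x_test)
            return float(np.mean((pred - y_test) ** 2))

        # Degree 9 has as many parameters as training points, so it threads
        # the noise exactly; the degree-1 line generalizes far better.
        ```

        The degree-9 fit interpolates the noise and oscillates between the training points, so its held-out error dwarfs the line’s, which stays near the noise floor: a simpler model, not an abandoned one.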

    • Douglas Knight says:

      Wikipedia claims that when Hayek coined “scientism,” his complaint was not that the models in economics were too complicated, thus subject to overfitting, but that they were too simple.

      I have a lot of sympathy with both complaints against economics, but I don’t think the word “scientism” is useful if I can’t tell which complaint is meant.

  9. Deiseach says:

    Scientism, to me, seems to have had its apogee in 50s Sci-Fi (and I speak as a skiffy loving geek). Where the pulps were full of tales of daring young men going out with their sliderules and figuring out the mysteries of space, inventing robots, settling new worlds, and One Day (besides our flying cars) there would be no more war, poverty, or any of the other social evils Because Science.

    Asimov’s “Foundation and Empire” stories may be the uber-example of this: what would otherwise be called clairvoyance or magic or reading the future in a fantasy is now a ‘science’ called psychohistory where, by dint of mathematics and a bit of handwaving, a genius can extrapolate from the mass currents of humanity exactly what is going to happen not alone next week but centuries from now.

    That’s all well and good, but then we get the Seldon Plan, where the fate of humanity is mapped out and kept on the path for centuries by a Foundation dedicated to that very end – even when deviations crop up, when things happen that are not foretold in the plan or outcomes look like they might turn out differently from what is forecast. Then, instead of altering your theory to fit the facts, a lot of crowbarring of situations and people happens to keep things on the rails, to make things come out the way the Plan says they should – and that, to me, is scientism: a religious-type faith in the sacred word, except that it’s based not on gods but on SCIENCE!!!!

    I mistrust vast social engineering efforts based on politicians scrabbling together a “Quick! We have to be seen to be doing something!” response and throwing a little sociology/psychology camouflage on top to make it seem more credible.

    As to the notion of the pre-Copernican privileged Earth, I would say that from the Dream of Scipio onwards, people were discussing the idea of the earth as a small, local thing; as referenced by Dante in Canto XXII of his “Paradiso”:

    133 With my eyes I returned through every one
    134 of the seven spheres below, and saw this globe of ours
    135 to be such that I smiled, so mean did it appear.
    136 That opinion which judges it as least
    137 I now approve as best, and he whose thoughts
    138 are fixed on other things may truly be called just.
    139 I saw Latona’s daughter shining bright,
    140 without that shadow for which I once believed
    141 she was both dense and rare.
    142 The visage of your son, Hyperion, I endured
    143 and saw how Maia and Dïone move
    144 around him in their circling near.
    145 Then I saw the tempering of Jove between his father
    146 and his son, and the changes that they make
    147 in their positions were now clear.
    148 All seven planets there revealed
    149 their sizes, their velocities,
    150 and how distant from each other their abodes.
    151 The little patch of earth that makes us here so fierce,
    152 from hills to rivermouths, I saw it all
    153 while I was being wheeled with the eternal Twins.
    154 Then I turned my eyes once more to those fair eyes.

    • Sarah says:

      What I often see (interestingly, I see it most in traditionalist religious conservatives to the right of the US Republican Party and in liberals or socialists to the left of the US Democratic Party) is a sort of knee-jerk anti-triumphalism.

      “These people think they can just defeat disease and poverty and war with Science! What childish, arrogant fools.”

      Except, um, science has made a significant dent in world hunger and infectious disease, and while I don’t know if this is a scientific achievement, wars have become less bloody than they once were…

      There’s a tendency to mock anybody who seems too cheerful or confident about the future. Especially if they *are* “daring young men”. Young men are stereotyped (perhaps with a grain of truth) as arrogant and foolhardy and callous to the suffering and difficulties of others. On the other hand, young men are disproportionately represented among inventors, explorers, founders, and so on.

      I’d say, give the daring young men a chance. They won’t always be right, but they’ll try, and we don’t win if we don’t try.

      • Deiseach says:

        Well, the engineering model of progress (“We will just figure out how to hit it with a wrench and that will fix it”) so optimistically prevalent in the post-war era didn’t pan out; there was indeed plenty of progress (heck, I remember getting the sugar lumps for polio) but somehow, darn it, in spite of more money, more education, more hygiene – people kept on being people and being greedy, angry, violent, lazy, careless and the whole shebang.

        I don’t doubt the use of science, or the application of it, but I do think that too much of the notion of “Now, if we can just put Chemical X into the water supply and make everyone be good” underlies some – not all, not most, but definitely some – of the ideas that “Science will explain everything and make it all better”.

        And I do think that it’s not real, actual, working scientists who make these kinds of grandiose claims, but rather people with their own agendas who want to make society better if only X, Y or Z were adopted – and they are found on the left and the right, among conservatives and liberals, believers and materialists, public intellectuals and rabble-rousing political wings. I make no discrimination when it comes to human stupidity 🙂

  10. Salem says:

    When I use the word scientism, I normally mean activities that have the patina of science but are not in fact scientific. Cargo-cult science, in other words, where style and jargon are used to cover up the fact that there is no there there.

    It’s true that too much credulity is harmful. But too much doubt – and in particular, selectively applied doubt – is just as harmful. I suppose it depends on what you see as the paradigm. For you, as you have laid out in the post, the paradigm is the Copernican revolution, with simplifying theories brushing away arbitrarily held prejudices.

    However, my experience of the social sciences is just the opposite – that people claiming the mantle of science hold grand unified theories based on hot air and assertion, and dismiss the actual evidence as superstition. My paradigmatic example would be the discoveries of Troy and Mycenae. And it still took years, but eventually even the most dogmatic Marxist had to concede that the Trojan War was a real thing, and that Homer had knowledge of its circumstances. And even then, these Scientismists used just the same arguments as in your post, and claimed that it was ridiculous to privilege a date of 1253 BC. And got embarrassed again, and so on.

    The best way to settle questions is by empirical methods, with testable hypotheses and verifiable data. But there are various questions where these methods are not available, either because the data is insufficient, or experimentation is impossible, or because the questions are not sufficiently well-formed. In such cases, claiming that your guesses should be privileged “Because Science” is just a status move, and should be mercilessly attacked as such.

  11. Carl Shulman says:

    “Consider the claim “once we start traveling the stars, I am 99% sure that the first alien civilization we meet will not be our technological equals”. This sounds rather bold – how should I know to two decimal places about aliens, never having met any?”

    Perhaps too bold. Physical limits might be reached rapidly, and starships require very advanced technology, not to mention travel time before encountering aliens. So meeting equals at around the limits of technology (conditioning on meeting aliens at all) doesn’t seem so implausible.

    • von Kalifornen says:

      My own prediction: when we meet aliens (by selection of a volume of space, not by SETI or something else with major sampling bias) we will find either primitive pre-moderns, super-civilizations, or civilizations at the technological limit. Where we will be, who knows.

  12. Paul Crowley says:

    Interesting that when (on Twitter) I pointed the journalist Oliver Burkeman at this article, he also raised the example of Sam Harris’s errors of moral philosophy as a great example of scientism, just as many commenting here have. Is there only one example of copper-bottomed scientism in the world? Can we have nine more points in this cluster before we draw a circle around it and give it a name please?

    • Paul Torek says:

      Alexander Rosenberg (http://en.wikipedia.org/wiki/Alexander_Rosenberg) “published a defense of what he called ‘Scientism’—the claim that ‘the persistent questions’ people ask about the nature of reality, the purpose of things, the foundations of value and morality, the way the mind works, the basis of personal identity, and the course of human history, could all be answered by the resources of science”. More discussion at http://blog.talkingphilosophy.com/?p=4209. Some of the “answers” to “persistent questions” amount to more like dismissals of the questions, but sometimes that’s a good thing.

  13. Will says:

    For me “scientism” has a slightly different connotation, that of people who are pretending to use the scientific method, but are actually not converging on truth.

    I probably get this connotation from reading Hayek, particularly his Nobel Prize lecture, though he never uses that term as such (preferring “scientistic”): http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/1974/hayek-lecture.html

  14. houseboatonstyx says:

    Very late comment. In the old view, the center of the Universe was not a place of honor. It was considered the worst place (short of Hell, which was the center of the Earth). It was the bottom, the gravity well of the Universe, where the dregs collected.

    For a short and readable treatment, see C. S. Lewis’s THE DISCARDED IMAGE. For equally readable but much longer and more detailed, Google
    site:m-francis.livejournal.com earth center of universe

  15. MugaSofer says:

    If it gets proven wrong once or twice or sixty times, we can dismiss that as a fluke, or an edge case, or It’s Beside The Point, or The Real Question Is Whether You Are Racist For Even Bringing That Up.

    Scott, are you ever going to explain your Deep Insights into this sort of thing? Obviously, some of these techniques may be evil. But you have unparalleled access to the rationalist community, so I’m guessing the instrumental value could be high.

    Oh, and I’m crazy curious, of course. Hmm, I think I’ll ask this on a few posts in the hope it’ll be seen.