Building Intuitions On Non-Empirical Arguments In Science

I.

Aeon: Post-Empirical Science Is An Oxymoron And It Is Dangerous:

There is no agreed criterion to distinguish science from pseudoscience, or just plain ordinary bullshit, opening the door to all manner of metaphysics masquerading as science. This is ‘post-empirical’ science, where truth no longer matters, and it is potentially very dangerous.

It’s not difficult to find recent examples. On 8 June 2019, the front cover of New Scientist magazine boldly declared that we’re ‘Inside the Mirrorverse’. Its editors bid us ‘Welcome to the parallel reality that’s hiding in plain sight’. […]

[Some physicists] claim that neutrons [are] flitting between parallel universes. They admit that the chances of proving this are ‘low’, or even ‘zero’, but it doesn’t really matter. When it comes to grabbing attention, inviting that all-important click, or purchase, speculative metaphysics wins hands down.

These theories are based on the notion that our Universe is not unique, that there exists a large number of other universes that somehow sit alongside or parallel to our own. For example, in the so-called Many-Worlds interpretation of quantum mechanics, there are universes containing our parallel selves, identical to us but for their different experiences of quantum physics. These theories are attractive to some few theoretical physicists and philosophers, but there is absolutely no empirical evidence for them. And, as it seems we can’t ever experience these other universes, there will never be any evidence for them. As Broussard explained, these theories are sufficiently slippery to duck any kind of challenge that experimentalists might try to throw at them, and there’s always someone happy to keep the idea alive.

Is this really science? The answer depends on what you think society needs from science. In our post-truth age of casual lies, fake news and alternative facts, society is under extraordinary pressure from those pushing potentially dangerous antiscientific propaganda – ranging from climate-change denial to the anti-vaxxer movement to homeopathic medicines. I, for one, prefer a science that is rational and based on evidence, a science that is concerned with theories and empirical facts, a science that promotes the search for truth, no matter how transient or contingent. I prefer a science that does not readily admit theories so vague and slippery that empirical tests are either impossible or they mean absolutely nothing at all.

As always, a single quote doesn’t do the argument justice, so go read the article. But I think this captures the basic argument: multiverse theories are bad, because they’re untestable, and untestable science is pseudoscience.

Many great people, both philosophers of science and practicing scientists, have already discussed the problems with this point of view. But none of them lay out their argument in quite the way that makes the most sense to me. I want to do that here, without claiming any originality or special expertise in the subject, to see if it helps convince anyone else.

II.

Consider a classic example: modern paleontology does a good job at predicting dinosaur fossils. But the creationist explanation – Satan buried fake dinosaur fossils to mislead us – also predicts the same fossils (we assume Satan is good at disguising his existence, so that the lack of other strong evidence for Satan doesn’t contradict the theory). What principles help us realize that the Satan hypothesis is obviously stupid and the usual paleontological one more plausible?

One bad response: paleontology can better predict characteristics of dinosaur fossils, using arguments like “since plesiosaurs are aquatic, they will be found in areas that were underwater during the Mesozoic, but since tyrannosaurs are terrestrial, they will be found in areas that were on land”, and this makes it better than the Satan hypothesis, which can only retrodict these characteristics. But this isn’t quite true: since Satan is trying to fool us into believing the modern paleontology paradigm, he’ll hide the fossils in ways that conform to its predictions, so plesiosaur fossils will still only turn up in areas that were underwater – otherwise the jig would be up!

A second bad response: “The hypothesis that all our findings were planted to deceive us bleeds into conspiracy theories and touches on the problem of skepticism. These things are inherently outside the realm of science.” But archaeological findings are sometimes deliberate hoaxes planted to deceive archaeologists, and in practice archaeologists consider and test that hypothesis the same way they consider and test every other hypothesis. Rule this out by fiat and we have to accept Piltdown Man, or at least claim that the people arguing against the veracity of Piltdown Man were doing something other than Science.

A third bad response: “Satan is supernatural and science is not allowed to consider supernatural explanations.” Fine then, replace Satan with an alien. I think this is a stupid distinction – if demons really did interfere in earthly affairs, then we could investigate their actions using the same methods we use to investigate every other process. But this would take a long time to argue well, so for now let’s just stick with the alien.

A fourth bad response: “There is no empirical test that distinguishes the Satan hypothesis from the paleontology hypothesis, therefore the Satan hypothesis is inherently unfalsifiable and therefore pseudoscientific.” But this can’t be right. After all, there’s no empirical test that distinguishes the paleontology hypothesis from the Satan hypothesis! If we call one of them pseudoscience based on their inseparability, we have to call the other one pseudoscience too!

A naive Popperian (which maybe nobody really is) would have to stop here, and say that we predict dinosaur fossils will have such-and-such characteristics, but that the question of what process drives this pattern – a long-dead ecosystem of actual dinosaurs, or the Devil planting dinosaur bones to deceive us – is a mystical one beyond the ability of Science to even conceivably solve.

I think the correct response is to say that both theories explain the data, and one cannot empirically test which theory is true, but the paleontology theory is more elegant (I am tempted to say “simpler”, but that might imply I have a rigorous mathematical definition of the form of simplicity involved, which I don’t). It requires fewer other weird things to be true. It involves fewer other hidden variables. It transforms our worldview less. It gets a cleaner shave with Occam’s Razor. This elegance is so important to us that it explains our vast preference for the first theory over the second.

A long tradition of philosophers of science has already written eloquently about this, summed up by Sean Carroll here:

What makes an explanation “the best.” Thomas Kuhn, after his influential book The Structure of Scientific Revolutions led many people to think of him as a relativist when it came to scientific claims, attempted to correct this misimpression by offering a list of criteria that scientists use in practice to judge one theory better than another one: accuracy, consistency, broad scope, simplicity, and fruitfulness. “Accuracy” (fitting the data) is one of these criteria, but by no means the sole one. Any working scientist can think of cases where each of these concepts has been invoked in favor of one theory or another. But there is no unambiguous algorithm according to which we can feed in these criteria, a list of theories, and a set of data, and expect the best theory to pop out. The way in which we judge scientific theories is inescapably reflective, messy, and human. That’s the reality of how science is actually done; it’s a matter of judgment, not of drawing bright lines between truth and falsity or science and non-science. Fortunately, in typical cases the accumulation of evidence eventually leaves only one viable theory in the eyes of most reasonable observers.

The dinosaur hypothesis and the Satan hypothesis both fit the data, but the dinosaur hypothesis wins hands-down on simplicity. As Carroll predicts, most reasonable observers are able to converge on the same solution here, despite the philosophical complexity.

III.

I’m starting with this extreme case because its very extremity makes it easier to see the mechanism in action. But I think the same process applies to other cases that people really worry about.

Consider the riddle of the Sphinx. There’s pretty good archaeological evidence supporting the consensus position that it was built by Pharaoh Khafre. But there are a few holes in that story, and a few scattered artifacts suggest it was actually built by Pharaoh Khufu; a respectable minority of archaeologists believe this. And there are a few anomalies which, if taken wildly out of context, you can use to tell a story that it was built long before Egypt existed at all, maybe by Atlantis or aliens.

So there are three competing hypotheses. All of them are consistent with current evidence (even the Atlantis one, which was written after the current evidence was found and carefully adds enough epicycles not to blatantly contradict it). Perhaps one day evidence will come to light that supports one above the others; maybe in some unexcavated tomb, a hieroglyphic tablet says “I created the Sphinx, sincerely yours, Pharaoh Khufu”. But maybe this won’t happen. Maybe we already have all the Sphinx-related evidence we’re going to get. Maybe the information necessary to distinguish among these hypotheses has been utterly lost beyond any conceivable ability to reconstruct.

I don’t want to say “No hypothesis can be tested any further, so Science is useless to us here”, because then we’re forced to conclude stupid things like “Science has no opinion on whether the Sphinx was built by Khafre or Atlanteans,” whereas I think most scientists would actually have very strong opinions on that.

But what about the question of whether the Sphinx was built by Khafre or Khufu? This is a real open question with respectable archaeologists on both sides; what can we do about it?

I think the answer would have to be: the same thing we did with the Satan vs. paleontology question, only now it’s a lot harder. We try to figure out which theory requires fewer other weird things to be true, fewer hidden variables, less transformation of our worldview – which theory works better with Occam’s Razor. This is relatively easy in the Atlantis case, and hard but potentially possible in the Khafre vs. Khufu case.

(Bayesians can rephrase this to: given that we have a certain amount of evidence for each, can we quantify exactly how much evidence, and what our priors for each should be? It would end not with a decisive victory of one or the other, but with a probability distribution – maybe an 80% chance it was Khafre and a 20% chance it was Khufu.)

I think this is a totally legitimate thing for Egyptologists to do, even if it never results in a particular testable claim that gets tested. If you don’t think it’s a legitimate thing for Egyptologists to do, I have trouble figuring out how you can justify Egyptologists rejecting the Atlantis theory.

(Again, Bayesians would start with a very low prior for Atlantis, assess the evidence for it as very weak, and end up with a probability distribution something like Khafre 80%, Khufu 19.999999%, Atlantis 0.000001%.)
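
For readers who want to see the mechanics, here is a minimal sketch of that kind of Bayesian bookkeeping in Python. The priors and likelihoods are invented numbers chosen only to land in the same ballpark as the figures above; nothing about Egyptology is being claimed.

    # Toy Bayesian update over who built the Sphinx.
    # Priors and likelihoods are invented numbers for illustration only.
    hypotheses = {
        "Khafre":   {"prior": 0.70, "likelihood": 0.60},   # fits most of the evidence
        "Khufu":    {"prior": 0.30, "likelihood": 0.35},   # fits a few anomalies better
        "Atlantis": {"prior": 1e-7, "likelihood": 0.05},   # needs many extra assumptions
    }

    # Bayes' rule: P(H | E) is proportional to P(H) * P(E | H).
    unnormalized = {h: v["prior"] * v["likelihood"] for h, v in hypotheses.items()}
    total = sum(unnormalized.values())
    for h, weight in unnormalized.items():
        print(f"{h}: {weight / total:.8%}")

The point of the exercise is not the particular numbers but that the output is a probability distribution rather than a verdict.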

IV.

How does this relate to things like multiverse theory? Before we get there, one more hokey example:

Suppose scientists measure the mass of one particle at 32.604 units, the mass of another related particle at 204.857 units, and the mass of a third related particle at 145178.152 units. For a while, this is just how things are – it seems to be an irreducible brute fact about the universe. Then some theorist notices that if you set the mass of the first particle as x, then the second is 2πx and the third is (4/3)πx³. They theorize that perhaps the quantum field forms some sort of extradimensional sphere, the first particle represents the radius of that sphere, the second the circumference of its great circle, and the third its volume.

(please excuse the stupidity of my example, I don’t know enough about physics to come up with something that isn’t stupid, but I hope it will illustrate my point)

In fact, imagine that there are a hundred different particles, all with different masses, and all one hundred have masses that perfectly correspond to various mathematical properties of spheres.
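
Concretely, the check the theorist is doing looks something like the sketch below, using the three made-up masses from the example; the particle names and the tolerance printout are mine, purely for illustration.

    import math

    # The made-up "measured" masses from the example (arbitrary units).
    measured = {"particle_1": 32.604, "particle_2": 204.857, "particle_3": 145178.152}

    # Treat the first mass as the radius of the hypothetical extradimensional sphere.
    x = measured["particle_1"]
    predicted = {
        "particle_1": x,                         # radius
        "particle_2": 2 * math.pi * x,           # circumference of a great circle
        "particle_3": (4 / 3) * math.pi * x**3,  # volume of the sphere
    }

    for name, m in measured.items():
        rel_err = abs(m - predicted[name]) / predicted[name]
        print(f"{name}: measured {m}, sphere prediction {predicted[name]:.3f}, relative error {rel_err:.1e}")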

Is the person who made this discovery doing Science? And should we consider their theory a useful contribution to physics?

I think the answer is clearly yes. But consider what this commits us to. Suppose the scientist came up with their Extradimensional Sphere hypothesis after learning the masses of the relevant particles, and so it has not predicted anything. Suppose the extradimensional sphere is outside normal space, curled up into some dimension we can’t possibly access or test without a particle accelerator the size of the moon. Suppose there are no undiscovered particles in this set that can be tested to see if they also reflect sphere-related parameters. This theory is exactly the kind of postempirical, metaphysical construct that the Aeon article savages.

But it’s really compelling. We have a hundred different particles, and this theory retrodicts the properties of each of them perfectly. And it’s so simple – just say the word “sphere” and the rest falls out naturally! You would have to be crazy not to think it was at least pretty plausible, or that the scientist who developed it had done some good work.

Nor do I think it seems right to say “The discovery that all of our unexplained variables perfectly match the parameters of a sphere is good, but the hypothesis that there really is a sphere is outside the bounds of Science.” That sounds too much like saying “It’s fine to say dinosaur bones have such-and-such characteristics, but we must never speculate about what kind of process produced them, or whether it involved actual dinosaurs”.

V.

My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.

One (doubtless exaggerated) way I’ve heard multiverse proponents explain their position is like this: in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”

Taking this exaggerated dumbed-down account as exactly right, this sounds about as hard as the dinosaurs-vs-Satan example, in terms of figuring out which is more Occam’s Razor compliant. I’m sure the reality is more nuanced, but I think it can be judged by the same process. Perhaps this is the kind of reasoning that only gets us to a 90% probability there is a multiverse, rather than a 99.999999% one. But I think determining that theories have 90% probability is a reasonable scientific thing to do.

VI.

At times, the Aeon article seems to flirt with admitting that something like this is necessary:

Such problems were judged by philosophers of science to be insurmountable, and Popper’s falsifiability criterion was abandoned (though, curiously, it still lives on in the minds of many practising scientists). But rather than seek an alternative, in 1983 the philosopher Larry Laudan declared that the demarcation problem is actually intractable, and must therefore be a pseudo-problem. He argued that the real distinction is between knowledge that is reliable or unreliable, irrespective of its provenance, and claimed that terms such as ‘pseudoscience’ and ‘unscientific’ have no real meaning.

But it always jumps back from the precipice:

So, if we can’t make use of falsifiability, what do we use instead? I don’t think we have any real alternative but to adopt what I might call the empirical criterion. Demarcation is not some kind of binary yes-or-no, right-or-wrong, black-or-white judgment. We have to admit shades of grey. Popper himself was ready to accept this, [saying]:

“The criterion of demarcation cannot be an absolutely sharp one but will itself have degrees. There will be well-testable theories, hardly testable theories, and non-testable theories. Those which are non-testable are of no interest to empirical scientists. They may be described as metaphysical.”

Here, ‘testability’ implies only that a theory either makes contact, or holds some promise of making contact, with empirical evidence. It makes no presumptions about what we might do in light of the evidence. If the evidence verifies the theory, that’s great – we celebrate and start looking for another test. If the evidence fails to support the theory, then we might ponder for a while or tinker with the auxiliary assumptions. Either way, there’s a tension between the metaphysical content of the theory and the empirical data – a tension between the ideas and the facts – which prevents the metaphysics from getting completely out of hand. In this way, the metaphysics is tamed or ‘naturalised’, and we have something to work with. This is science.

But as we’ve seen, many things we really want to include as science are not testable: our credence for real dinosaurs over Satan planting fossils, our credence for Khafre building the Sphinx over Khufu or Atlanteans, or elegant patterns that explain the features of the universe like the Extradimensional-Sphere Theory.

The Aeon article is aware of Carroll’s work – which, along with the paragraph quoted in Section II above, includes a lot of detailed Bayesian reasoning encompassing everything I’ve discussed. But the article dismisses it in a few sentences:

Sean Carroll, a vocal advocate for the Many-Worlds interpretation, prefers abduction, or what he calls ‘inference to the best explanation’, which leaves us with theories that are merely ‘parsimonious’, a matter of judgment, and ‘still might reasonably be true’. But whose judgment? In the absence of facts, what constitutes ‘the best explanation’?

Carroll seeks to dress his notion of inference in the cloth of respectability provided by something called Bayesian probability theory, happily overlooking its entirely subjective nature. It’s a short step from here to the theorist-turned-philosopher Richard Dawid’s efforts to justify the string theory programme in terms of ‘theoretically confirmed theory’ and ‘non-empirical theory assessment’. The ‘best explanation’ is then based on a choice between purely metaphysical constructs, without reference to empirical evidence, based on the application of a probability theory that can be readily engineered to suit personal prejudices.

“A choice between purely metaphysical constructs, without reference to empirical evidence” sounds pretty bad, until you realize he’s talking about the same reasoning we use to determine that real dinosaurs are more likely than Satan planting fossils.

I don’t want to go over the exact ways in which Bayesian methods are subjective (which I think are overestimated) vs. objective. I think it’s more fruitful to point out that your brain is already using Bayesian methods to turn the photons striking your eyes into this sentence, to make snap decisions about what sense the words are used in, and to integrate them into your model of the world. If Bayesian methods are good enough to give you every single piece of evidence about the nature of the external world that you have ever encountered in your entire life, I say they’re good enough for science.

Or if you don’t like that, you can use the explanation above, which barely uses the word “Bayes” at all and just describes everything in terms like “Occam’s Razor” and “you wouldn’t want to conclude something like that, would you?”

I know there are separate debates about whether this kind of reasoning-from-simplicity is actually good enough, when used by ordinary people, to consistently arrive at truth. Or whether it’s a productive way to conduct science that will give us good new theories, or a waste of everybody’s time. I sympathize with some of these concerns, though I am nowhere near scientifically educated enough to have an actual opinion on the questions at play.

But I think it’s important to argue that even before you describe the advantages and disadvantages of the complicated Bayesian math that lets you do this, something like this has to be done. The untestable is a fundamental part of science, impossible to remove. We can debate how to explain it. But denying it isn’t an option.


438 Responses to Building Intuitions On Non-Empirical Arguments In Science

  1. rastlin says:

    Somewhat tangential but probably of interest to Scott and readers of the blog: the sphere example in section IV reminds me of the Koide formula. If a, b, c are the masses of the electron, muon, and tau, then

    (a + b + c) / [sqrt(a) + sqrt(b) + sqrt(c)]^2 = 2/3

    with an error less than one part in 100,000. Reception from the scientific community about the significance of this has been…mixed, to say the least.
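
    For anyone who wants to check this at home, a quick sketch (the lepton masses are the approximate PDG values in MeV, quoted from memory, so treat them as roughly right):

        import math

        # Approximate charged-lepton masses in MeV (roughly the PDG values).
        m_e, m_mu, m_tau = 0.5110, 105.658, 1776.86

        koide = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2

        print(f"Koide ratio: {koide:.6f}")              # comes out within ~1e-5 of 2/3
        print(f"deviation from 2/3: {abs(koide - 2/3):.1e}")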

    • knzhou says:

      One thing working against the Koide formula is that typically, a theory is specified by fundamental constants, but the measured constants (i.e. the ones appearing in the formula) are shifted by interactions between the fields, by a process called RG flow. If you come up with an identity that isn’t generically stable under RG flow, then it’s not easy to create a theory that reproduces it. Of course it’s possible, but it inevitably ends up being complicated, which makes it unappealing.

      There are a lot of outstanding numerological “hints” like this. Numerology has historically led to some incredible successes for physics, and there’s absolutely nothing wrong with working on it. But like all speculative hints it’s no guarantee of truth.

      • koreindian says:

        Would you mind listing some more examples of numerological successes in physics? This is interesting and useful for me, and googling “numerology and physics” predictably led to not the sort of results I wanted :).

        • knzhou says:

          The problem is just one of connotation. “Numerology” is usually used as an insult, so when the exact same procedure works, it isn’t called numerology.

          Yet literally the two greatest examples of scientific discovery in history, Newton’s gravity and Maxwell’s electromagnetic waves, came out of noting that two numbers coincidentally matched. The structure of the periodic table was also deduced by pure numerology, and became critical for the development of quantum mechanics later on. Later on the same thing repeated in nuclear physics, where people deduced nuclear structure from what they literally called “magic numbers”, as you can google.

          There is nothing inherently wrong with numerology, because it is just a form of pattern recognition, which is the foundation on which science in general is based.

    • Akhorahil says:

      As always, Context of Discovery vs. Context of Justification. It’s at least casually interesting that the numbers come out this way, but until you can do something with it, it’s just a curiosity.

      For instance, let’s say that you find out that there is no measurable error in the numbers. Then you might make the prediction that future, more accurate measurements won’t find any measurable errors either. If that turns out to be the case (especially several times in a row), it’s time to go “hmm”, but even then, it’s just something that might inspire an actual scientific theory that explains this fact by means of some kind of reasons.

    • Michael_druggan says:

      “with an error less than one part in 100,000.”

      You’re gonna have to add a few more zeros there before I’m impressed. There are a lot of formulas you can make with 3 numbers, so one of them is bound to work out to something close to a round number.

      • A1987dM says:

        Well, the formula continues to be accurate to within measurement uncertainties even now that measurement uncertainties are ~25 times smaller than when the formula was first published, so there is that.

    • deciusbrutus says:

      Are the masses of those particles even consistent to one part in a hundred thousand?

  2. Bugmaster says:

    I am somewhat confused by this article, because the solutions to these seemingly insurmountable problems are not only obvious (IMO), but also pretty old.

    What principles help us realize that the Satan hypothesis is obviously stupid and the usual paleontological one more plausible? … It gets a cleaner shave with Occam’s Razor.

    Yes, that’s exactly it. Our two competing theories are not merely “long-dead ecosystem of actual dinosaurs” vs. “Satan tricked us”, but “long-dead ecosystem of actual dinosaurs” vs “Satan tricked us in a very specific way that looks exactly like a long-dead ecosystem of actual dinosaurs“. If we want to predict anything about the world, we can use either model, and we’d get exactly the same results — because Satan does absolutely no useful work. If you wanted to, you could add even more terms to the model, e.g. “Satan did not act alone, he had help from Loki”. Philosophically speaking, such scenarios are really fun to speculate about; but scientifically speaking they are totally pointless. Science doesn’t say, “Satan doesn’t exist”; it merely says, “Believe in Satan if you want, I don’t care, here’s a formula that predicts where the next dinosaur bone will be”.

    The dinosaur example is radically different from the particle one:

    Is the person who made this discovery doing Science? And should we consider their theory a useful contribution to physics?

    The answer is emphatically “yes”, because our two models are “the distribution of particle masses is arbitrary” vs. “the distribution of particle masses follows a specific mathematical relationship”. This is something we can actually test. If we measure the masses of some previously un-measured particles, they’ll either fall into this mathematical progression, or they won’t. Until we do so, we are free to use either model in our calculations. Once we’ve done the measurements, we are compelled to accept the hypersphere model (at least provisionally) or reject it.

    So, is there anything about the many-worlds explanation that we can use to test it? Does it do anything, or is it a useless layabout just like Satan? I am not a physicist, so I don’t know the answer. But if the answer is “yes”, at least in principle, then many-worlds is a legitimate hypothesis, regardless of whether or not we have the evidence right at this very moment. But if the answer is “no”, in principle, then what’s the point in talking about it, other than perhaps pure entertainment?

    • Faza (TCM) says:

      This.

      For cases like the Satan example, I like to use the term: Zero Epistemic Value Object (ZEVO). Assuming our “Satan model” will give exactly the same results as our “non-Satan model”, Satan is a ZEVO. The identifying feature of ZEVOs is that you can add or subtract as many as you like and it won’t affect the predictions of your model in any way (just like adding or subtracting any number of zero values won’t affect the value of whatever it is you’re adding them to/subtracting them from).

      What I feel needs pointing out, however, is that neither Satan nor Atlanteans are really ZEVOs – in a global sense. Just so long as we’re looking at local-Satan (who messes with dinosaur fossils and only dinosaur fossils, in a very specific way that looks exactly the same as the non-Satan prediction) or local-Atlanteans (whose sole contribution to existence was building the Sphinx) you can believe these explanations, I suppose, if it makes you happy (they won’t affect your predictions).

      However, the concept of “Satan” goes far beyond just messing about with dinosaur fossils.

      A world where Satan exists ought to be fundamentally different from a world where Satan does not exist (that’s the Biblical claim, anyway) and these differences ought to be fundamentally testable. Just one example: according to folklore, Satan or his minions can be summoned and interacted with. It would be a really dumb thing to do, but would conclusively prove the existence of Satan, or “Satan”-compatible entity – if successful.

      Same goes for the Atlantis hypothesis. In a world where Atlantis exists, we would expect to find a variety of evidence to support the hypothesis. According to Plato, the Atlanteans were at war with Athens, so we should expect that evidence for Atlantis should be no older than the earliest evidence for Athens. Similarly, Plato claims that the Atlanteans conquered a considerable area in Africa and Europe, so we should have tons of candidate sites for evidence to be preserved.

      If our models of Satan or Atlantis predict we should be finding evidence of their existence and we aren’t finding any, this is a strong indication that our models are wrong (at the very least: have little predictive value). If we try to preserve them nonetheless, by claiming that they cannot be predictive (and are, therefore, unfalsifiable) we reduce those models to global ZEVOs – we postulate that these entities will have zero epistemic value in all circumstances.

      The Occam approach to ZEVOs is: given that the presence or absence of ZEVOs will not affect your predictions in any way, you have no reason to include any identified ZEVOs in your model. Moreover, given that you can define any number of ZEVOs into existence (all that’s needed is a name and a claim that it won’t affect the results in any way), and that you have no decision algorithm for excluding any particular ZEVO (including those you haven’t thought of), if you’re going to exclude any ZEVOs, you might as well exclude all of them. No ZEVO is ever necessary for your model, by definition.

      • Jonluw says:

        For cases like the Satan example, I like to use the term: Zero Epistemic Value Object (ZEVO). Assuming our “Satan model” will give exactly the same results as our “non-Satan model”, Satan is a ZEVO.

        I like this term.
        But just to be clear, because I couldn’t quite tell from your post which way you were leaning with regards to Bugmaster’s comment on the many worlds interpretation:

        Essentially, the central claim of the many worlds interpretation of quantum mechanics is that the Copenhagen interpretation is wrong because wavefunction collapse is a ZEVO.

        • Ketil says:

          Couldn’t you also say that the heliocentric universe is a ZEVO compared to the geocentric universe? Everything would look the same, it’s just a matter of which bit you pin as the origin. The difference is that a lot of the equations get very complicated, so among the models with equal explanatory power, we give a nod to Occam, and choose the one that makes the math simpler.

          • Jonluw says:

            One property of our equations in physics is that they are independent of reference frame.
            As such, a “center of the universe” is a ZEVO (because it defines a “correct” reference frame, which adds no explanatory power to our equations).

            The difference between heliocentrism and geocentrism is that the former is more elegant, but neither one introduces new “objects” of zero epistemic value compared to the other.

          • Faza (TCM) says:

            Couldn’t you also say that the heliocentric universe is a ZEVO compared to the geocentric universe?

            No.

            For the geocentric solar system to be equivalent to a heliocentric one, gravity would have to work differently or the masses of the various objects involved would have to be different. You can’t have “everything looking the same” in both cases.

            It’s another “local-Satan” issue. Or, because I’m something of a closet Daoist: “When you look for similarities, everything is the same; when you look for differences, everything is different.”

          • Akhorahil says:

            Ketil, that might have been the case for the Copernican universe – it had roughly the same complexity and explanatory power as the geocentric one. Although even then, it made predictions about different observations – for instance, it predicted that stellar parallax would occur and that, since none could be measured with the instruments of the day, better instruments would eventually detect it and the stars must be far further away than anyone thought.

            Already with Kepler, and especially with Newton, the ZEVO equivalence no longer holds: you could still construct a geometrical geocentric model, but it no longer makes any sense.

          • sclmlw says:

            The real difference between the two was that a geocentric solar system was incapable of predicting the movements of planets and other satellites. It had to update its models every time new observations were made, and as such was entirely retrospective. It is a clear scientific principle that we should prefer a model that is predictive of observed phenomena over one that must be updated with every new observation. This is the fundamental difference in the Satan/dinosaur example above. If paleontologists hypothesize that dinosaurs with aquatic bone structures should have lived near water, they can go test whether there is evidence for that water. You might say “Satan put them near water”, but that didn’t predict anything, as the analysis was all retrospective. Therefore we clearly prefer the predictive model over the retrospective one.

        • Faza (TCM) says:

          Essentially, the central claim of the many worlds interpretation of quantum mechanics is that the Copenhagen interpretation is wrong because wavefunction collapse is a ZEVO.

          I suppose that would depend on what kind of many worlds interpretation we are talking about. If it’s something like Hawking’s:

          All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that’s all the many worlds interpretation is.

          I’m totally down with it and I can see how not having the collapse postulate may be considered superior.

          Then again, I’m perfectly happy to accept “wave function collapse” as a name for “that thing that quantum systems do, when you try to interact with them”.

          My bigger concern is taking a mathematical description of the world (which works) and making ontological claims based on that description. For example, the principle of least action is good physics, but doesn’t really work as an ontological principle.

          • Jonluw says:

            I’m not familiar with Hawking’s interpretation of the matter, but the MWI I am talking about is Everett’s interpretation.

            Then again, I’m perfectly happy to accept “wave function collapse” as a name for “that thing that quantum systems do, when you try to interact with them”.

            The problem is that you can’t express this formally in the theory, so the Copenhagen interpretation isn’t a complete theory, because of this hand-waving.
            Specifically, you would need to define what “you” means, and why interacting with “you” causes wavefunctions to collapse, when interacting with other physical objects, like electrons, does not cause wavefunctions to collapse.

          • acymetric says:

            when interacting with other physical objects, like electrons, does not cause wavefunctions to collapse.

            Not being an expert on any of this I honestly don’t know what the answer would be from someone who is, but it seems like the answer would be something like “all those interactions do cause wavefunctions to collapse”.

          • Faza (TCM) says:

            I don’t intend to defend the Copenhagen interpretation – not least because I’m not sufficiently knowledgable and from what I understand MWI is more elegant.

            The statement should be read as meaning no more and no less than: “If someone were to propose ‘wave function collapse’ as a name for a set of behaviours of a quantum system under observation, I would have no problem with it” – if only because it doesn’t really matter what you call things.

            It does matter – if you postulate that something actually exists – whether that thing really exists or not (in the sense of affecting predictive capability). My understanding is that Hawking isn’t doing the maths any differently from Everett, but simply rejecting the metaphysical component:

            Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you’re calculating is conditional probabilities.

          • Jonluw says:

            @acymetric
            (I don’t understand how this comment system works where apparently reply threads have a maximum depth)
            Interactions between for instance electrons do not cause the wavefunction to collapse. Neither in Copenhagen nor in any other interpretation. Experimental evidence supports that fact, so it’s not a point of controversy.

            However, in Copenhagen, interactions with humans are special because we do not merely interact with particles, we “observe” them (the term “observe” is not well-defined, hence the scare quotes). Essentially, for Copenhagen to be a self-consistent theory it would need to introduce some variable akin to “consciousness” to account for why some interactions are “observations” and why others are merely “interactions”.

          • kaathewise says:

            There is nothing special about wave function collapse, and it certainly is not about humans or observers. It also is not a real physical thing any more than a closed system is in classical mechanics.

            When we think about wave functions we usually separate out a very small piece of the Universe, e.g. several particles, and assume that they mostly don’t interact with the outside world – naturally, it’s only a model. While this assumption holds, the wave function of these particles behaves nicely, but when the particles actually do interact with the outside of the model, it suffers the so-called collapse.

            If we imagine a wave function of the ever increasingly complex system of particles that would eventually encompass all of the Universe, we can talk about the universal wave function that governs the state of the whole Universe.

            This wave function would:

            1. Be impractical to use.
            2. Never collapse.
            3. Have our usual wave functions as its projections onto the subspaces describing the states of limited sets of particles. The collapse of these wave functions is simply a mathematical inconvenience of such projections.

            So the reason behind wave function collapse is not physical, it is caused by us drawing convenient but physically arbitrary boundaries between the particles we research and the outside world.
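
            If it helps, the “collapse as projection” point can be shown in a few lines with a toy two-qubit universe; the setup below is my own illustration, not anything specific to a real experiment.

                import numpy as np

                # "Universal" wave function of a toy two-qubit universe: a system qubit
                # entangled with an environment qubit, (|00> + |11>) / sqrt(2).
                psi = np.zeros(4)
                psi[0] = psi[3] = 1 / np.sqrt(2)
                rho_universe = np.outer(psi, psi)        # pure state, never collapses

                # Project down to the system alone by tracing out the environment.
                rho = rho_universe.reshape(2, 2, 2, 2)   # indices: (s, e, s', e')
                rho_system = rho[:, 0, :, 0] + rho[:, 1, :, 1]

                print(rho_system)   # diag(0.5, 0.5): to the system alone it looks "collapsed"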

          • JohnWittle says:

            I feel like you guys are missing the point.

            If you set up a computer program to simulate a universe, and you used the Schrodinger equation and nothing else to determine the evolution of the system from one state to the next, it would simulate multiple branches.

            That is, even after you open the box, both the branch containing the dead cat and the branch containing the live cat would continue to be computed separately, no longer interacting with one another.

            In order for one of the branches to be chosen as real, and the other to be chosen as not real, you must add additional code to your simulation, that stops computing one of the branches. Because on its own Schroedinger’s equation continues to compute those other branches indefinitely.

            Indeed, it depends on those branches still existing, as there are potential small-scale effects on one branch depending on what is happening in immediately adjacent branches. If the immediately adjacent branches were not being computed, there would be no ‘quantum effects’ which distinguished our universe from a classical one, the whole point of decoherence is that the behavior of a single elementary particle changes depending on whether or not it is being smeared across multiple branches or not, and we can measure that change.

            This assumes a particular formulation of Occam’s razor called Kolmogorov complexity, aka minimum message length, where ‘complexity’ is measured as lines of code, not RAM or CPU usage. I think that is a safe bet. Over and over throughout history, we have badly underestimated the CPU and RAM usage of the universe. When it was first suggested that the stars might be like the sun except immeasurably far away, people were incredulous that space could be that big. When it was suggested that galaxies were perhaps a similar order of magnitude increase in scale, it was again regarded as absurd by the natural philosophy community. When it was suggested that perhaps physics, far from being computationally tractable, was actually the most intractable thing ever conceived – so tedious to compute that just solving the Schrodinger equation to predict the behavior of a single hydrogen atom, sitting in empty space not interacting with anything, is almost more than our biggest supercomputers can handle – people thought that was crazy, absurd by default.

            It seems pretty damn clear that God, or Tao, or the metaphysical prereality set, or whatever vocabulary you want to use to describe the answer to “why is our physics the way it is instead of something else”, has got as much RAM and CPU as it could ever need, but it is extremely stingy when it comes to lines of code. That’s why all of the formal treatments judge the complexity of a mathematical object by the number of terms in the most-compressed equation which describes that object. Thinking that smaller universes are simpler than bigger universes is like thinking smaller spheres are simpler than bigger spheres. But an ellipsoid actually is more complicated than a sphere, requiring more math to describe, more code to simulate.

            By my accounting, MWI is what you would get if you had a physics emulator running Schrodinger’s equation, and Copenhagen is what you would get if you had a physics emulator running Schroedinger plus some code to prune all but one branch, except leave tiny pieces of adjacent branches prior to collapse so that you still get decoherence effects for small enough systems. Sure, the latter program has the advantage that it doesn’t spiral out of control in CPU and RAM usage, and it seems like exactly the sort of hack that a human being would come up with if they were going to run Schrodinger’s equation on their home computer. But it adds a lot of extra code. The collapse postulate on its own is bound to be a lot more complicated than just Schroedinger’s equation on its own. Probably increase the size of the program from like two or three lines of code to 50 or 60.

            That is why many worlds is a more elegant hypothesis than Copenhagen.

            You can disagree with this proposition, but you should at least understand what it is we are saying.
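
            A minimal sketch of the point, with everything shrunk to a two-branch toy state (the unitary and the function names are invented for illustration): bare unitary evolution keeps every branch, and a collapse rule really is extra code bolted on top.

                import random

                # Two-branch "cat" amplitudes: (alive, dead).
                def evolve(amplitudes, unitary):
                    # Pure Schrodinger-style evolution: a unitary acting on the amplitudes.
                    # Note that it never deletes a branch.
                    a, d = amplitudes
                    (u00, u01), (u10, u11) = unitary
                    return (u00 * a + u01 * d, u10 * a + u11 * d)

                def collapse(amplitudes):
                    # The extra machinery: sample one branch with Born-rule (squared
                    # amplitude) probability and throw the other branch away.
                    a, d = amplitudes
                    p_alive = abs(a) ** 2 / (abs(a) ** 2 + abs(d) ** 2)
                    return (1.0, 0.0) if random.random() < p_alive else (0.0, 1.0)

                H = ((2 ** -0.5, 2 ** -0.5), (2 ** -0.5, -(2 ** -0.5)))  # a 50/50 "beam splitter"
                state = evolve((1.0, 0.0), H)

                print("after unitary evolution:", state)            # both branches still there
                print("after added collapse:   ", collapse(state))  # one branch pruned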

          • viVI_IViv says:

            However, in Copenhagen, interactions with humans are special because we do not merely interact with particles, we “observe” them (the term “observe” is not well-defined, hence the scare quotes). Essentially, for Copenhagen to be a self-consistent theory it would need to introduce some variable akin to “consciousness” to account for why some interactions are “observations” and why others are merely “interactions”.

            Not really. Quantum measurement is broadly defined as any interaction that causes macroscopic effects. There is no need for a human to actually observe any of these effects.

          • Viliam says:

            @JohnWittle

            I suspect that intuition “whatever computer is simulating our universe, it cannot be that much stronger than the strongest possible computer in our universe… and the computer simulating quantum physics without collapse would need exponentially more time and memory” is too strong to overcome.

            From my perspective, it seems too rash to assume that constraints in our universe also apply to the simulating universe. Would hypothetical sentient people simulated in my PC also conclude that my universe cannot have a resolution vastly higher than 1920×1080?

            To be honest, I don’t want to dismiss this line of reasoning completely either; maybe there are some constraints that apply across universes. I am just saying we currently have no evidence for it. The realization of how much computing power our fully quantum simulation would require simply feels absurd (to someone whose instincts have been adapted to our universe).

          • viVI_IViv says:

            I suspect that intuition “whatever computer is simulating our universe, it cannot be that much stronger than the strongest possible computer in our universe… and the computer simulating quantum physics without collapse would need exponentially more time and memory” is too strong to overcome.

            Or at least it would have to be a quantum computer, but this only pushes the problem one turtle below.

          • Jonluw says:

            @viVI_IViv
            Quantum measurement is not well-defined in the Copenhagen interpretation, is my point.
            Interacting with a human causes the wavefunction to collapse. Interacting with an electron does not cause the wavefunction to collapse. As you say, there’s not necessarily anything special about humans. The transition from no collapse to collapse could happen anywhere along the spectrum from electron to human. However, the point is there are two classes of interactions in Copenhagen: Those which cause collapse, and those which do not (I just drew the line at humans arbitrarily for the sake of brevity).

            Problem is, Copenhagen does not at all deal with why there are these two classes of interactions, or how it works. The well-defined math only handles interactions without collapse.

          • Jonluw says:

            @Faza (TCM)
            I agree it would be fine to call some behavior of quantum systems “wave function collapse” if it was well-defined. The problem with Copenhagen is that the supposed phenomenon of wave function collapse is precisely not well-defined.

            Regarding Hawking, I don’t know the details of his objection, but from your quote it seems like he denies the existence of “other” branches of the wave function. Which strikes me as sort of strange, considering we know experimentally that both “branches” exist in microscopic entanglement scenarios.

          • viVI_IViv says:

            Problem is, Copenhagen does not at all deal with why there are these two classes of interactions, or how it works. The well-defined math only handles interactions without collapse.

            But in the many-worlds interpretation if you want to actually predict observations you still have to use Born’s rule for some interactions and not others: an electron can simultaneously pass through two slits and interfere with itself, a cat can’t.

            Many-worlds assumes that Born’s rule could be derived as a limit for a large number of microscopic interactions, but it doesn’t actually show how. Copenhagen is just minimalist and doesn’t assume anything about where Born’s rule comes from.

          • JohnWittle says:

            @Viliam

            Sure, my point is less about speculating about extrauniversal aliens. It’s more about the notion that, empirically, the best formulation of Occam’s Razor is the one where we determine complexity by counting lines of code, not instantiated objects or clock cycles. A universe with 30,000 planets and a universe with 1 planet have very similar complexity if they have the same laws of physics. The additional planets are not considered ‘extra parsimonious objects’; the laws of physics are.

            By the same token, Schroedinger is simpler than Schroedinger + Collapse, because the extra branches do not count as extra complexity the way the extra code does.

          • sovietKaleEatYou says:

            I want to signal-boost JohnWittle’s comments, which give one of the best explanations I’ve seen of the need for multiverses in quantum mechanics: namely, in nonrelativistic quantum mechanics, when you work with the Schroedinger equation, many worlds are part of the picture, period. Our reality is then a choice of a branch of universes.

            Indeed, it depends on those branches still existing, as there are potential small-scale effects on one branch depending on what is happening in immediately adjacent branches. If the immediately adjacent branches were not being computed, there would be no ‘quantum effects’ which distinguished our universe from a classical one, the whole point of decoherence is that the behavior of a single elementary particle changes depending on whether or not it is being smeared across multiple branches or not, and we can measure that change.

            There are, to be sure, other points of view on quantum. But every physical computation involving the Schroedinger equation (and these are common and useful in experiment) presupposes the existence of a multiverse. Saying these are meaningless is fine just like saying complex numbers are meaningless is fine: we’ll never “measure” a complex quantity. Nevertheless they’re not just potentially useful, they are present in many if not most meaningful quantum-mechanical computations.

          • dionisos says:

            A universe with 30,000 planets and a universe with 1 planet have very similar complexity if they have the same laws of physics. The additional planets are not considered ‘extra parsimonious objects’; the laws of physics are.


            This is tangential to the main debate, but the additional planets could add a lot of complexity if you can’t derive them from the laws of physics.

            They could also remove complexity in some cases (in the same way the Library of Babel has less complexity than its books).

          • Aapje says:

            @JohnWittle

            Isn’t the need for multiple branches simply a consequence of a desire to predict the possible consequences of unobserved and/or unpredictable ‘decisions’, with a lack of information about those ‘decisions’? Here I define a decision as a specific outcome in a case where there are, from our perspective, multiple possible outcomes.

            For example, imagine us leaving a robot arm in a room with a wooden block. The robot arm runs a program that moves the block one position left or right every minute, for 10 minutes.

            If a bayesian predictor wants to use brute force to predict the likelihood of the block being in certain positions, based on the assumption that the robot arm makes random moves, then she needs to calculate the outcomes of 2^10 moves/branches. This is also true if there is an observer in the room with the robot arm, who doesn’t give information to the predictor outside the room, even though that observer sees exactly the choices being made and knows the likelihood of the outcome after 10 minutes with 100% certainty, with no need to track 2^10 moves/branches, but merely 10 moves, in 1 branch.

            Now imagine that the robot arm doesn’t actually make random decisions, but moves the block left if it is slightly left already within its position or right if it is slightly right in its position, where these variations are caused by very minor environmental influences, that we are incapable of observing. A predictor who can measure the environmental influences with infinite precision could step out of the room and predict the outcome of the moves with 100% certainty by deciding for each of the 10 decisions, what the only possible ‘decision’ is. The reason why the aforementioned bayesian predictor can’t use this method, but has to fall back on probabilities, is because her mediocre observational skills create uncertainty. The uncertainty is not a feature of the decisions made by the robot, but is a feature of the limited observational skills of the bayesian predictor.

            In other words, the multi-verse view fits the experience of the bayesian predictor, much more than the experience of the observer and not at all the experience of the predictor with perfect measurements of the environmental influences. So if the multi-verse explanation’s usefulness doesn’t depend on the ‘decisions’ that the universe makes, but on the observational capacity of the observer/predictor, then it seems to me that the multi-verse doesn’t necessarily describe the universe, but rather, it describes the consequence of limited observational capacity.
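
            A rough sketch of the asymmetry being described, for the robot-arm example (the particular observed move sequence is invented; only the counting matters):

                from collections import Counter
                from itertools import product

                N_MOVES = 10

                # Outside predictor: no information about the arm's "decisions", so she
                # has to sum over all 2**10 left/right sequences to get a distribution.
                outside = Counter(sum(seq) for seq in product((-1, +1), repeat=N_MOVES))
                print({pos: n / 2 ** N_MOVES for pos, n in sorted(outside.items())})

                # Inside observer: sees each move as it happens, so only one branch
                # ever needs to be tracked. (This particular sequence is made up.)
                observed = [+1, -1, -1, +1, +1, +1, -1, +1, -1, +1]
                print("observed final position:", sum(observed))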

          • sovietKaleEatYou says:

            @Aapje there are two things that are not quite correct with what you said, one of which is interesting. The “boring” wrong thing is that the semantics of what is and isn’t actually random are irrelevant unless you have access to at least some information that invalidates the hypothesis “this process is random to the best of my understanding”. An impossibly complicated decision algorithm with no easily computable biases is indistinguishable from a random algorithm: indeed, as you say yourself, even for a truly random algorithm one possible decision algorithm is for God to predict, with prior knowledge of the outcome, the next step as it is meant to happen. The whole point of probability is to abstract away such impossible knowledge (and there is no difference here between the Bayesian and the frequentist unless you have a strong reason to believe in unusual priors). Sometimes it turns out that randomness is not truly random, but I don’t see how anything systematic can be deduced from this by waving around the “Bayesian” magic wand.

            Now the interesting thing wrong is that you don’t actually have to compute 2^N “possible universes” in order to accurately sample a random decision process of the type you describe: rather, you can just run it, well, randomly, in N steps.

            This is different in QM, and the difference is subtle. Namely, in (nonrelativistic) QM the probabilities you deduce for a given universe are related in a nontrivial way to probabilities of universes “close to” but not causally related to it, and so you can’t just model the “branch of the universe choosing process” as a random decision process, without modelling all other possible universes at the same time… at least as far as we know.

            However the dependence on causally unrelated universes is highly circumscribed. There is another sense in which QM is almost the same as a random branching process, except with complex probabilities. To the best of my knowledge, there is no way to formalize this without computing out a full multiverse (in particular the Copenhagen interpretation is hopelessly doomed for any sufficiently complex quantum system), but the similarity implies that in some sense quantum randomness cannot be “too different” from conventional randomness. This is in turn related to the limitations and the advantages of quantum computers: they are, as it turns out, not that different from classical computers except in the ability to run one or two new “exotic probability” algorithms.

          • viVI_IViv says:

            By the same token, Schroedinger is simpler than Schroedinger + Collapse, because the extra branches do not count as extra complexity the way the extra code does.

            But if your universe simulator predicts actual observations it needs the code to do collapse with Born’s rule, one way or the other, unless you can reduce Born’s rule to Schrodinger equation.

          • JohnWittle says:

            @viVI_IViv

            That’s true; the Born rule is an additional parsimonious object in a way that additional planets aren’t. Schroedinger on its own does not seem to match our experiences. The natural prediction made by Schrodinger would be that the probability of experiencing a given branch is equal to the amplitude of that branch divided by the summed amplitudes of all the branches. Instead, our empirical experience is that the probability of experiencing a given branch goes as the squared amplitude of that branch, relative to the summed squared amplitudes of all the branches.

            This additional operation, where we take the squared value instead of the value, really is additional complexity, really does hurt MWI as a theory.

            But this additional complexity does not penalize MWI alone. The exact same thing is true of Copenhagen. A collapse postulate is complicated, but the simplest collapse postulate would almost certainly pick a branch based on the amplitude of that branch. A collapse postulate which must perform an additional operation, exponentiating the amplitude of all the branches prior to picking one as ‘real’, adds exactly the same amount of complexity to Copenhagen as the Born rule adds to Manyworlds. Literally the same number of bits of additional code required to specify the theory.

            So the existence of the Born rule is, indeed, a mystery, something that remains to be explained. But “Schroedinger + Born” is still far, far simpler than “Schroedinger + Collapse + Born”.

          • viVI_IViv says:

            So the existence of the Born rule is, indeed, a mystery, something that remains to be explained. But “Schroedinger + Born” is still far, far simpler than “Schroedinger + Collapse + Born”.

            I don’t see how you could have the Born rule as a first principle without collapse; in my understanding they are the same thing.

            Anyway, if two theories compute the same thing, then they have the same shortest program, hence the same Kolmogorov complexity.

          • TheAncientGeeksTAG says:

            @JohnWittle

            > If you set up a computer program to simulate a universe, and you used the Schrodinger equation and nothing else to determine the evolution of the system from one state to the next, it would simulate multiple branches.

            That’s true, but if the program isn’t telling you which branch you are in, who is?

            If the program running the SWE outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow. Which means that to make use of the information, you have to identify the subset of bits relating to your world. That’s extra complexity which isn’t accounted for because it’s being done by hand, as it were.
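
            A toy sketch of that bookkeeping cost (the branch index here is hypothetical, just to make the point visible): a program that writes every branch onto one tape still leaves you needing extra bits to locate your own record.

```python
import itertools

N = 4  # number of binary "measurement outcomes" per branch

# A "simulate everything" program: every branch's record, concatenated on one tape.
branches = [''.join(bits) for bits in itertools.product('01', repeat=N)]
tape = '|'.join(branches)

# To read off *your* observations you still need an N-bit index saying which
# chunk of the tape is your branch; the simulator itself never supplies it.
my_branch_index = 0b1011            # hypothetical: whatever you actually observed
my_record = branches[my_branch_index]

print(len(tape), "characters of tape;", N, "extra bits needed to locate", my_record)
```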

            Whichever interpretation you hold to, you need some way of discarding unobserved results in order to match observation.

            To make matters worse, if you drop the constraint that the output tape has to contain correct and unambiguous predictions, MWI isn’t the simplest theory: if you allow one observer to pick out their observations from a morass of data, then the easiest way of generating data that contains any substring is a PRNG. You basically end up proving that “everything random” is the simplest explanation.

            Basically, there are trade-offs between complexity and accuracy. If an MWI simulator were as accurate as a CI simulator, it should be preferred on grounds of simplicity — but it isn’t as accurate. The minimal subset of calculation you need to do in order to predict observation is in fact going to be the same whatever interpretation you hold to. Even many worlders would keep renormalising according to observed data, and discarding unobserved data, which is to say, behaving “as if” collapse were occurring, even though they don’t interpret it that way.

          • outis says:

            @Jonluw: “collapse” is just the CI term for when things stop doing weird quantum stuff, no? But if MWI describes the same observable reality, it also has to explain why you don’t see an interference pattern when a person has to pick between two doors (or something). The MWI term will not be “collapse”, but something like “the world in which you go through door A does not interact with the world in which you go through door B”. But MWI still needs to explain (or at least describe) which worlds interact and which don’t, and it’s not clear that it can do it any better or simpler than CI can explain “collapse”.

          • JohnWittle says:

            What?

            MWI has no trouble with that. You experience both. Both copies of you exist. Two non-interacting copies.

            Claiming that many worlds can’t deal with one branch containing a human splitting into two branches containing humans is like saying that classical physics can’t deal with twins. “How does physics decide which twin you are?!” It doesn’t have to; the atoms just do what the atoms do and they compute what they compute and it all adds up to normality.

            Schroedinger’s equation perfectly predicts, on its own, which worlds interact with each other and which don’t. You don’t need to add anything to Schroedinger’s equation to get that.

        • viVI_IViv says:

          Essentially, the central claim of the many worlds interpretation of quantum mechanics is that the Copenhagen interpretation is wrong because wavefunction collapse is a ZEVO.

          How so? Clearly the superposed quantum state (|dead> + |alive>)/sqrt(2) before measurement (wavefunction collapse) is different from the state |dead> or the state |alive>. One could in principle design an interferometer that does cool quantum things with the superposed state but not with the basis states.

          The many-worlds interpretation has to postulate that measurement-like events cause the observable quantum degrees of freedom to become massively entangled to the unobservable thermal degrees of freedom of the environment, in a way that statistically results in the same predictions of wavefunction collapse (Born’s rule). As far as I know, nobody so far managed to convincingly derive Born’s rule from first principles under the many-worlds interpretation, which implies that, as far as we know, the many-worlds interpretation isn’t any simpler than other interpretations.

          • Kindly says:

            You can measure a quantum state in any basis. It’s convenient to build a bomb tester that measures in the basis |live>, |dud>, but you could build a bomb tester that measures in the basis (|live>+|dud>)/sqrt(2), (|live>-|dud>)/sqrt(2). Or at least, this is conceivable in principle, though maybe this is not an interaction we know how to have with photon-sensitive bombs.

            It wouldn’t be a useful bomb tester in the second basis, because the two states forming the basis don’t actually distinguish between the things we care about. There are other bases we could pick that are slightly more useful, but it’s probably not surprising that if the question you ultimately want answered is “Is this bomb live?” then the best basis to use is the |live>, |dud> basis.
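
            The basis point is just linear algebra, nothing specific to real bomb-tester hardware; a rough sketch with toy state vectors:

```python
import numpy as np

live = np.array([1.0, 0.0])        # |live>
dud = np.array([0.0, 1.0])         # |dud>
plus = (live + dud) / np.sqrt(2)   # (|live> + |dud>)/sqrt(2)
minus = (live - dud) / np.sqrt(2)  # (|live> - |dud>)/sqrt(2)

def measure_probs(state, basis):
    """Outcome probabilities for measuring `state` in an orthonormal `basis`."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

print(measure_probs(plus, [live, dud]))    # ~[0.5, 0.5]: tells you nothing useful
print(measure_probs(plus, [plus, minus]))  # ~[1.0, 0.0]: deterministic in the other basis
print(measure_probs(live, [plus, minus]))  # ~[0.5, 0.5]: and vice versa
```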

            (Disclaimer: I mostly approach quantum stuff through the computing side of things, where we say that “we have a particle that can be in states |0> and |1>, which is maybe a photon, and possibly something something spin 1/2 whatever that means”, and also “these are all the physically possible operations you can include in a quantum circuit, and in practice some of these may be easier or harder to build”. My experience is limited to a single graduate class a few years ago, but I took careful notes.)

          • Jonluw says:

            How so? Clearly the superposed quantum state (|dead> + |alive>)/sqrt(2) before measurement (wavefunction collapse) is different from the state |dead> or the state |alive>. One could in principle design an interferometer that does cool quantum things with the superposed state but not with the basis states.

            I’ll try to explain. Please bear with me if I’m going over stuff you already know. If you don’t mind I’ll exchange the cat for a spin-1/2 particle. We’ll use the two spin basis states |up> and |down>. For brevity’s sake, I won’t be normalizing the states.
            Let’s begin with the part MWI and Copenhagen agree about.

            The spin state of an electron is in general any linear combination of |up> and |down>. We can engineer an experiment where an electron interacts with another electron such that the second electron ends up spinning in the same direction as the first electron.
            In other words, if the state of the first electron before the interaction is |1_up>, after the interaction the state of the system is |1_up>|2_up>. And vice versa if the first electron starts in the spin-down state.

            What if the first electron is in the state |1_up> + |1_down> ?
            Then, after the interaction the state of the system is |1_up>|2_up> + |1_down>|2_down> .
            The system is now in an entangled state.
            In MWI, we would say there are two branches of the wave function: one where both electrons are spin-up, and one where they are spin-down (with some technical caveats we will ignore at the moment).
            It is important to note that both “branches” are real. It is not the case that both particles are either spin-up or spin-down. The entangled system is in a superposition where the particles are both up and both down at the same time.
            This is not a point of contention. Copenhagen and MWI agree about this description of the state, and the Quantum Eraser experiment demonstrates that the superposition is real.

            Now let’s get to the point where MWI and Copenhagen differ:
            We can engineer an experiment where an observer measures the spin of an electron, and if the electron is spin-up the observer will see it spinning up, and if the electron is spin-down the observer will see it spinning down. This is in close analogy to the previous experiment.
            Let us denote the state of the observer by |O_up> if he saw the electron spinning up, and |O_down> if he saw the electron spinning down.

            We run the same mathematics as before. If the electron starts in the state |1_up>, the system ends up in the state |1_up>|O_up>, etc.
            If the electron starts in the state |1_up> + |1_down>, the system ends up in the state |1_up>|O_up> + |1_down>|O_down>.
            At least in MWI it does. MWI says we do not treat the observer differently from any other quantum system, and so the situation is nearly exactly analogous to the previous experiment.
            The Copenhagen interpretation does not play ball at this point. It claims the state of the system is either |1_up>|O_up> or |1_down>|O_down> , and not in a superposition. The observer is treated as being somehow different from electron 2 from the previous experiment, in that the observer does not become entangled with electron 1. This is what we call “collapse of the wave function”, and Copenhagen does not define how it comes about that the wave function supposedly collapses in the second experiment but not the first. Note that this notion of collapse is an additional component which Copenhagen adds to the basic quantum mechanical framework which Copenhagen and MWI agree about.

            There is no difference in what we would expect the observer to experience, whether the final state is |1_up>|O_up> + |1_down>|O_down> or “either |1_up>|O_up> or |1_down>|O_down>”, so introducing this notion of collapse does not alter the predictions of the theory in any way. It merely exists so one will not have to think there exists a separate observer on each “branch” of the wave function.
            That is why I call wave function collapse a ZEVO. The fact that it is not well-defined, neither mathematically nor metaphysically, is just a further problem.
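
            A short numpy sketch of the two experiments above, with two spins standing in for electron 1 and electron 2 / the observer. Tracing out the second system shows the interference terms disappear after the interaction, and the statistics for the first spin are the same whether or not a collapse step is added; this is only a toy calculation, not a derivation of the Born rule.

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def density(psi):
    return np.outer(psi, psi.conj())

def trace_out_second(rho):
    """Reduced state of the first system (partial trace over the second)."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Before the interaction: electron 1 alone, in |up> + |down> (normalized).
plus = (up + down) / np.sqrt(2)
print(np.round(density(plus), 2))        # off-diagonal terms: a genuine superposition

# After the interaction: |1_up>|O_up> + |1_down>|O_down> (normalized).
entangled = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
print(np.round(trace_out_second(density(entangled)), 2))  # diagonal 0.5 / 0.5, no interference

# A collapse postulate would replace the entangled state with a 50/50 mixture
# of |1_up>|O_up> and |1_down>|O_down>; its reduced state is identical.
mixture = 0.5 * density(np.kron(up, up)) + 0.5 * density(np.kron(down, down))
print(np.round(trace_out_second(mixture), 2))
```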

            In regards to the Born rule, I recall some paper deriving it, but I can’t find it at the moment. Supposedly it was based on this one though. Personally, what bothers me the most is trying to interpret probability amplitudes metaphysically within MWI, not so much the math of it.

          • migo says:

            This is a reply to Jonluw above.

            “The observer is treated as being somehow different from electron 2 from the previous experiment, in that the observer does not become entangled with electron 1.”

            I am very far from an expert on quantum physics, and I would be extremely grateful if someone pointed out the flaws in my reasoning below.

            There is a difference between the observer and electron 2. The system composed of electrons 1 and 2 is a quantum system subject to quantum laws. The observer is part of the macroscopic world. When a system is not interacting with its surroundings, it enters “quantum mode” and its future state evolves probabilistically (evolution given by the wave function). When the system interacts with something outside it, its state assumes a subspace of the probability distribution describing it prior to the interaction, and there is an information transfer from the system to the outside (collapse). After the interaction, the system goes back to “quantum mode” and again its future state evolves probabilistically, but starting from the probability distribution subspace assumed during the interaction. In the quantum realm, systems are often isolated, not interacting with their surroundings. In the macroscopic world, everything is interacting all the time.

            edit: and what I mean by interaction is information transfer from the quantum system to the outside. An observation/realisation of a random variable (the quantum system), which reduces its probability space.

          • TheAncientGeeksTAG says:

            @Johnluw

            There is no doubt that superpositions exist: there is some doubt that they qualify as worlds.

            The first problem with the superposition based approach is lack of objectivity. Whether a pure (as opposed to mixed) quantum state qualifies as a superposition can depend on how an observer writes it down, by which I mean the observer’s choice of basis. If superposition can be made to disappear by an observer choosing the format in which to write observations, then it is not robustly objective. (A mixed as opposed to pure state does not have that problem.)

            The second problem is one of size. One would naturally tend to conceptualise a world as being about the size of the observable universe. But experimentally, complex coherent systems are difficult to maintain, and require extreme conditions, such as cooling to near absolute zero. These factors cast doubt on the ability of universe-sized superpositions to arise naturally.

            The third arises directly from being coherence based rather than incoherence based: coherence based “worlds” are not causally isolated, and continue to interact (strictly speaking, to interfere).

            The fourth is based on irreversibility, or erasure.

          • TheAncientGeeksTAG says:

            ..cont.

            The fourth is based on irreversibility, or erasure. Quantum erasure provides evidence for the reality of superpositions, but to interpret that as supporting many worlds means accepting that a state of affairs that can be “Thanos snapped” or rendered unreal counts as a world.

            (There is also a decoherence based version of many worlds, which is quite different from the coherence based version).

    • Jonluw says:

      The answer is emphatically “yes”, because our two models are “the distribution of particle sizes is arbitrary”, vs. “the distribution of particle sizes follows a specific mathematical relationship”. This is something we can actually test. If we measure the sizes of some previously un-measured particles, they’ll either fall into this mathematical progression, or they won’t.

      Keep in mind that the scenario Scott proposed is slightly more difficult, because it supposes the sphere model does not predict the existence of new particles beyond the ones we have discovered.

      If we want to predict anything about the world, we can use either model, and we’d get exactly the same results — because Satan does absolutely no useful work. If you wanted to, you could add even more terms to the model, e.g. “Satan did not act alone, he had help from Loki”.
      […]
      So, is there anything about the many-worlds explanation that we can use to test it ? Does it do anything, or is it a useless layabout just like Satan ?

      In analogy to the sphere relation between particles, the many-worlds interpretation does not predict new phenomena for us to observe. It is simply a more parsimonious framework for understanding quantum mechanics than the Copenhagen interpretation is.
      The thing is, in Scott’s analogy to paleontology it feels most natural to liken MWI to the Satan hypothesis, because MWI is the “challenger” to the established view, but that would be turning the analogy on its head. MWI (or Dynamic collapse etc.) is in this case analogous to the “dinosaurs existed” hypothesis. The Copenhagen interpretation does not include a mathematically rigorous definition of “collapse of the wavefunction”, or a proposed physical mechanism. In the Copenhagen interpretation, wavefunction collapse is something which “just happens when quantum systems are observed” as if by magic. This collapse of the wavefunction is roughly analogous to Satan in the paleontology example.

      Theories like MWI aim at creating a more metaphysically tidy framework which can explain the observed evidence without appealing to “Satan”. The weakness of MWI, compared to, for instance, Dynamic collapse, is that it is hard to think of some way to test it. More or less the only way to “prove” MWI would be if all the alternative theories differ from Copenhagen in testable ways. In that case we could falsify every alternative until all we’re left with is a choice between MWI and Copenhagen. In which case I think the choice is clear, because Copenhagen is an incomplete theory.

      • the verbiage ecstatic says:

        Isn’t the MWI theory equally incomplete? My understanding is that it doesn’t give a rigorous explanation of what it means for the observer to move to another universe, or under what conditions that happens. Or is my physics out of date?

        My understanding is that there’s a phenomenon we’ve observed, which is that when we interact with a quantum system we don’t see a superposition, we see one possibility or the other. We don’t have a mathematical description of when or why this happens. The Copenhagen interpretation claims that we stay the same and the wave function changes (“collapse”), whereas MWI claims that the wave function stays in superposition but we change (“ending up in one universe or another”), but they are equally mysterious explanations, and really the truth of it is there’s more science to be done. Physics experts, is this wrong?

        • Jonluw says:

          The Copenhagen interpretation claims that we stay the same and the wave function changes (“collapse”), whereas MWI claims that the wave function stays in superposition but we change (“ending up in one universe or another”)

          You’ve got Copenhagen right, but MWI does not claim we end up in “one universe or the other”. According to MWI “we” end up in both universes. That is to say, there is one universe in which you observed a dead cat, and one universe in which you observed a living cat. The “you” in either universe will feel equally like they are the same “you” who opened the box.

          The biggest problem with MWI is how to interpret probability amplitudes. Some papers have been published showing that one can derive the Born rule in MWI, but I do not know of a satisfying explanation of what probability amplitudes are, metaphysically.

          • dionisos says:

            That is to say, there is one universe in which you observed a dead cat, and one universe in which you observed a living cat. The “you” in either universe will feel equally like they are the same “you” who opened the box.

            And this is how the MWI is another good argument against the idea of a persistent self and for some kind of empty individualism 🙂
            (I already think empty individualism is true for philosophical reasons, but with the MWI point of view it is easier to defend)

          • real_human9000 says:

            “We” certainly do end up in different universes. The real_human that saw the living cat is different from the real_human that saw the dead one. And forget the observer. What about the cat, or its children? They only exist in one branch of the wave function (or at least, the particular configuration of matter that we call the cat exists in only one branch).

            But the incompleteness of MWI is best illustrated without recourse to subjective experience. MWI has no answer to what is meant by saying that other branches of the wave function “exist” or are “equally as real”. In what sense do they exist – is there some definition of existence beyond physical presence in the universe?

          • kenny says:

            @real_human9000 There are several ‘yous’, each in a different ‘universe’.

            MWI has no answer to what is meant by saying that other branches of the wave function “exist” or are “equally as real”. In what sense do they exist – is there some definition of existence beyond physical presence in the universe?

            It does have an answer – quantum interference is evidence that ‘they’ exist just as much as ‘we’ do. They interact with our branch – a literal “physical presence”.

      • sclmlw says:

        In that case we could falsify every alternative until all we’re left with is a choice between MWI and Copenhagen. In which case I think the choice is clear, because Copenhagen is an incomplete theory.

        I still think there’s a difference between an untested hypothesis that is descriptive only, like MWI, and a tested theory independent of whether we’ve failed to identify and test a competing theory. If we’re talking strict Occam’s Razor about whether to prefer a hypothesis that requires the least assumptions, that’s one thing. But then we’re just talking about whether we’re creative enough to come up with good new hypotheses.

        Take this back to the time of geocentrism, and we have a situation where the prevailing hypothesis doesn’t predict new observations very well and has to keep getting updated with every measurement. In the absence of the Copernicans, we’d have to say we prefer geocentrism to an alternative of, what, “nothing”? Maybe instead we should create the equivalent of a ‘null hypothesis’ here and geocentrism has to compete with the hypothesis that “current explanations do not adequately predict new phenomena”.

        Indeed, the geocentrism case should make us wary of purely descriptive models, no matter how fancy the math used to describe them. I think this especially applies to the sphere example and string theory. Sometimes we think we know something about the universe because of mathematical suggestion, but only later discover that we had an incomplete understanding of mathematics all along. So extrapolating from what we know is dangerous if it’s not supported by testable hypotheses.

        Say you’re talking to a mathematics student who has learned the definition of an integral as “area under a curve”. You demonstrate some mathematical association with observed phenomena where the calculation is exactly the same as those he uses for integrals, and he naturally assumes the observed phenomenon represents an area calculation of some kind. Clearly the mathematics suggests there’s an area present, whether it’s directly observed or not. He goes on to hypothesize higher-dimensional areas, wound up tight, or whatever, that cannot be observed but must be included in the model because “the math requires it to be so”. Later he learns that an integral is more generally defined as a limit of sums (Riemann sums), and doesn’t always have a definite relationship to area. All that work imagining hypothetical areas was entirely unnecessary.
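
        A quick numerical illustration of that last point, using an arbitrary example function, f(x) = x on [-1, 1]: the limit-of-sums definition gives the signed integral, which need not match the geometric “area under the curve” picture.

```python
import numpy as np

xs = np.linspace(-1, 1, 1_000_001)
dx = xs[1] - xs[0]

riemann_sum = np.sum(xs[:-1] * dx)           # the integral as a limit of sums: ~0
area_picture = np.sum(np.abs(xs[:-1]) * dx)  # the naive "area under the curve": ~1

print(riemann_sum, area_picture)
```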

    • Max Chaplin says:

      I find the point about Satan similar to what I was going to say – since Satan makes the effort to falsify a natural process with self-consistent rules, the evolutionary paradigm of paleontology is still useful, except in this case it studies not the real world but the fiction that Satan builds. Our ideas about how EvolutionOS operates still hold true, even if it runs not natively but on SatanVM. This is kinda like how physicists deal with the simulation hypothesis – they don’t see it as rivaling the standard theories, just adding something on top of them.

      If we measure the sizes of some previously un-measured particles, they’ll either fall into this mathematical progression, or they won’t. Until we do so, we are free to use either model in our calculations.

      This is a problematic rule for a reason Scott has briefly mentioned:

      Suppose the scientist came up with their Extradimensional Sphere hypothesis after learning the masses of the relevant particles, and so it has not predicted anything.

      Conditioning the validity of a theory on predictive power can lead to a paradoxical situation where gathering data hinders our ability to identify the underlying rule behind it. If we carelessly measure all of the particles’ masses, our ability to create theories with predictive power about them will be gone forever.

      One could remedy the situation by giving some particle masses to a scientist who is unaware of the theory and then, if he discovers The Sphere himself and deduces the masses of other particles, take this as a successful prediction. But then, why not isolate the process his brain uses to arrive at the model, call this process “science”, and apply it to the complete set of data?

      • markus says:

        The remedy highlights how predictive power is a tool used to convince other scientists about what’s true rather than a metaphysical key to truth.

    • naath says:

      The Many Worlds and Copenhagen interpretations are built on the same experimental data and make the same predictions. It is not impossible that some future experimentalist will be able to devise something that differentiates them but we are (afaik) not anywhere near.

      Neither is “more true” than the other, both are mathematical constructs built to explain experiment. The Copenhagen interpretation offers somewhat easier maths in most circumstances, but makes much less sense, and offers no real attempt to explain wtf is going on except some handwavey notion that the waveform ‘collapses’ … uh, because it does; if you wish to pick an interpretation based on ‘making sense’ then the Copenhagen interpretation … does not score well.

      My impression, from when I had to read a lot of the maths for my undergrad dissertation on Everett, is that most people in the field want their quantum computer to work and are happy to leave the philosophy to philosophers. If it ever becomes experimentally relevant then maybe it’ll come out of the realm of speculation.

    • Garrett says:

      “Believe in Satan if you want, I don’t care, here’s a formula that predicts where the next dinosaur bone will be”

      This is why I think mocking the idea of e.g. young-earth creationists as geologists is silly. Even as an atheist, I care more about your ability to find oil cheaply than about whether you think it’s because of a billion-years process or instead a satanic plot to tempt the faithful with a simulation of a billion-year process. The insistence on orthodoxy gets in the way of the useful. Can you find the oil/fossils/whatever?

    • So, is there anything about the many-worlds explanation that we can use to test it ? Does it do anything, or is it a useless layabout just like Satan ?

      Yes:

      https://en.wikipedia.org/wiki/Many-worlds_interpretation#Weak_coupling

      I’m not saying it will actually work, but theorists have come up with ways to test it, and I stopped reading the original article once it became clear the author was not aware of this.

      • eyeballfrog says:

        So did anyone actually perform that experiment? The article is unclear on that. It’s also unclear how exactly that experiment shows the existence of other universes.

        • acymetric says:

          I am totally out of my wheelhouse here, but it sure sounds like the experiment being described requires you to take measurements in both parallel worlds, unless I am misunderstanding. That being the case, I would guess that this experiment hasn’t been performed nor will it be (I mean, I guess they could excite the ion but there would be no way to determine the results).

          It involves an isolated ion in an ion trap, a quantum measurement that would yield two parallel worlds (their difference just being in the detection of a single photon), and the excitation of the ion from only one of these worlds. If the excited ion can be detected from the other parallel universe, then this would constitute direct evidence in support of the many-worlds interpretation

          The last sentence seems to be saying “if the other world can detect the excited ion, that would be evidence of other worlds”.

          “The first rule of tautology club is…”

      • imoimo says:

        [Physicist here] I hadn’t heard of the Plaga paper so thanks for bringing it up. However the proposed experiment seems weak for a few reasons.

        First, an article on Plaga from 1995 quotes him as saying the experiment could be “complete trash,” and that if not he expected it to be tried within a year or two. It wasn’t tried. (And I can’t find more recent quotes from him.)

        Second, we’ve indirectly done Plaga’s experiment many times. Plaga’s experiment is designed to detect an abnormality when two measurements happen one after the other. Measurements happen all the time in physics and engineering labs, yet no one’s reported such an abnormality.

        Third, Scott Aaronson writes about MWI but doesn’t mention Plaga. He instead mentions Bouwmeester et al. who propose a macroscopic quantum experiment. The virtue of the experiment seems to be its large scale in number of particles, which Aaronson notes has technically never been probed. This isn’t a test of MWI per se, it’s just a test of QM. But if QM were shown to be different than we thought in this case, it could render MWI impossible (at least according to Aaronson).

        So in the sense of Bouwmeester et al. MWI might be testable, but it’s still inseparable from our current understanding of QM. You can’t confirm or rule out MWI without changing QM itself, and man good luck doing that.

        (To be clear, MWI still might be testable by testing QM, but it seems so unlikely that it’s worth assuming it’s not, as Scott Alexander does.)

    • Le Maistre Chat says:

      Does it do anything, or is it a useless layabout just like Satan ?

      I just want to say that I love the expression “The multiverse is a useless layabout just like Satan.”

    • Paul Torek says:

      This only works against a pseudo-Satan hypothesis that philosophers would make up. Real flesh and blood people who sincerely advocate a Satanic hypothesis don’t make the same predictions that paleontologists do. You may have heard of this related hypothesis called Armageddon?

      Armageddon out of here.

  3. Sniffnoy says:

    Another instructive example (which I am blatantly stealing from Eliezer Yudkowsky, although AFAIK he might have stolen it from someone else too) is: Do things cease to exist when it is no longer possible for them to have any effect on us?

    Under our current understanding of cosmology, it is possible for the expansion of the universe to carry things so far away that it is impossible for them to have any effect on us ever again; even if they were to travel towards us at lightspeed, they’d never reach us. (Unless the universe were to start contracting, I guess, which it seems is not going to happen.)

    So, it would be entirely consistent with all observations then to suppose that such objects simply cease to exist once this occurs. But, this is stupid. Parsimony demands that in fact such objects continue to exist, and continue to obey the same laws of physics, even after they can no longer affect us.

    (Imagine trying to write down a model for the universe in which this is not the case! In fact this point is made even stronger by the fact that the universe doesn’t seem to be made of “objects” in the intuitive sense at all, but rather fields that extend throughout the entire universe; but we can ignore that — the point remains the same without that.)

    The reasoning that we use to infer the existence of such objects is the same reasoning that can be used to infer various other things which can’t directly affect us, but let us write down a parsimonious model rather than a hacked-together one.

    • Robert Jones says:

      The existence of objects beyond the cosmological event horizon isn’t obviously parsimonious. On the usual cosmological model (although of course we have no way of knowing whether this holds true beyond the event horizon) this requires us to hypothesise an infinite amount of stuff which is unknowable and has no effect on us.

      I think the perceived non-parsimony in thinking that objects cease to exist when they cross the event horizon is in imagining that some additional metaphysical apparatus is required to delete the objects, as if they were being vaporised by some forcefield at the event horizon. If we instead think that the objects have simply passed out of the region in which things exist, then it becomes clear that we’re not saying that anything mysterious has happened.

      • ADifferentAnonymous says:

        Doesn’t this violate the Copernican Principle pretty hard?

        • Robert Jones says:

          I’m not sure how wedded I am to the Copernican Principle, but no. Every point in the universe has a cosmological event horizon, beyond which nothing exists. You’re thinking that {things which exist} should be observer independent, but there’s no reason to require this. If my twin crosses the event horizon, then from his point of view, it is I who have ceased to exist.

          • Viliam says:

            Similar reasoning could apply to the branches in quantum superposition. The more different they get (the more particles get entangled), the less they exist from each other’s perspective.

            Okay, this is a stronger assumption, because existence is not only relative, but also gradual. On the other hand, if the relative existence can only decrease (you can’t disentangle an entangled particle, I think), it seems possible.

      • Nick says:

        “The region in which things exist” sounds pretty mysterious to me. Why would we suppose existence has boundaries?

        • Robert Jones says:

          That seems an odd way to look at it. We don’t start off with suppositions about the extent of existence. We just make observations and note things which exist, all of which by definition lie within the event horizon. Surely the mysterious thing would be positing the existence of unobservable things?

          • sclmlw says:

            The hypothesis “things can pass outside the field of existence” is a positive claim. It bears the burden of proof against the null hypothesis, which does not require the introduction of a distinct spatial boundary for the field of existence or the possibility that some things that previously existed can become non-existent. The claim must be tested experimentally before any credence should be given to it.

          • Nick says:

            What @sclmlw said, basically.

      • Dacyn says:

        Another question (also stolen from EY) makes things clearer: what if you have to make a decision based on whether or not things outside the horizon “exist”? For example, what if you are considering funding a colonization mission? Does it make sense to say “well once they cross the horizon they will have left the realm of existence, so it doesn’t matter whether or not I supply them with enough food to survive once they get there”? Most people would say that is crazy.

        (Incidentally, this is part of the reason I don’t hold the Many-Worlds interpretation — there really isn’t any possible decision that you could make whose quality would depend on whether many worlds exist or not. So in that case it does seem more intuitive to say that there is a “realm of existence” which is bounded in some sense.)

        • hls2003 says:

          For example, what if you are considering funding a colonization mission? Does it make sense to say “well once they cross the horizon they will have left the realm of existence, so it doesn’t matter whether or not I supply them with enough food to survive once they get there”?

          I don’t particularly disagree with the overall point, but this example doesn’t really work. There could be no “colonization mission” because the “horizon” of receding space is, by definition, outside of any conceivable ability to reach. Infinite food and infinite time would not suffice to get a colony ship to a part of the universe which has receded past the horizon of observability, because those locations are receding faster than light.

          • Matthew Green says:

            Isn’t it true that the Universe’s rate of expansion is increasing?

          • nadbor says:

            I don’t think the example is about a planet that is currently beyond the horizon. I think it is about a planet that can still be reached by a ship if we send it now but which will become unreachable by the time this ship gets there.

          • Dacyn says:

            What Nadbor said. The crucial point is that the colony won’t be able to send a message back to us.

        • Robert Jones says:

          From the point of view of the colonisers, of course they would like to have food. They wouldn’t observe anything special to occur on crossing my event horizon (save perhaps for my ceasing to exist, if that was something that concerned them). But from my point of view, there’s no reason for me to care, and there’s no reason why I would consider funding the mission in the first place. What happens to them after they cross the event horizon cannot even in theory have any effect on me. They may as well have crossed into Valinor. You can think of them as existing if you like, but they are (in Faza’s phrase) ZEVOs. They are metaphysically unnecessary and including them has no effect on your predictive power.

          • nadbor says:

            Some people would still care about the lives of the colonists even if they can’t have any causal influence on their own lives. That is not in practice any different from caring about needy people in a foreign country. When you donate money to help a starving child in a 3rd world country, you may as well be sending that money beyond an event horizon for all the good that will do you.

    • meh says:

      isn’t there like an entire chapter in the sequences about this? angels dancing on electrons or something?

      i am surprised that chapter wasn’t the starting point for this post

  4. lightrook says:

    But we don’t even need to go that far to note that subjective reason is compatible with science, because subjective reasoning is *prior* to empirical reasoning: you cannot do science until you have taken a leap of faith and accepted induction over anti-induction.

    • Bugmaster says:

      That’s a bit of a slippery-slope fallacy, IMO. Yes, there is probably no good way to empirically test induction vs. anti-induction, just like there’s no good empirical reason to reject hard solipsism. However, once you accept on faith that reality is probably a thing, and that it probably follows some set of reasonably consistent laws, it does not follow that you are justified in accepting absolutely everything else on faith as well.

    • Peter says:

      Leap of faith? We’re born on that side, no leaping required. It requires a lot of effort to get ourselves into the position of Cartesian doubt that might even make such a leap of faith appear like an option.

      There are other ways back, of course, even ones that don’t require leaping. One is to think around the problem a bit until you make a mistake, and hey presto, a bridge back to the comfortable familiar world. This seems to have been Descartes’ approach, although he wouldn’t have described it as such. The second is to go and play a game of backgammon or something, and when you’re done, note how absurd it all seems and get back on with your life. This was Hume’s approach to a position of radical doubt, and it seems to have worked quite nicely.

      There are of course the Pyrrhonists, who had the neat trick of leaving their judgement suspended on some level while still getting on with their lives on another level. Apparently this is meant to bring tranquility, and there are interesting similarities with Buddhism – and evidence that they may have been influenced by it.

      Of course the Pyrrhonist approach raises the question of which of these levels science is meant to be working on, and whether those levels can (or should?) be further subdivided.

      I was about to say you can get people who accept Cartesian-style doubt “on some level”, but you don’t encounter people who run their life on it, who accept it on all levels. Then I thought – how would we know? There’s a lot that goes into being able and willing to use words; someone who lost that… would be basically uncommunicative, a good candidate for being a psychiatric inpatient, and would be indistinguishable from all of the other ones.

  5. Ketil says:

    I think the correct response is to say that both theories explain the data, and one cannot empirically test which theory is true, but the paleontology theory is more elegant

    It is true that we cannot by empirical means distinguish paleontology from an all (or nearly so) powerful adversarial supernatural (or alien, or whatever) entity trying to fool us about the actual nature of things. What makes paleontology more “elegant” is that it is a theory that explains exactly this phenomenon. The Satan explanation, on the other hand, is a universal theory: with equal ease it explains away any other observation or data you might have. Earth is the center of the universe, but the Great Deceiver is just messing with physics to trick scientists into sin. Plate tectonics? An illusion from Satan, forfeiting your soul if you accept it. The sky looks blue to you? Satan messing with your perception to tempt you away from God.

    I think this (whatever it’s called, argument from universality?) is different from lack of falsifiability (although you get that, too, by upping Satan’s power and making more assumptions about his motivation), and from Occam’s razor.

    • Murphy says:

      See also “last tuesdayism”

      The scientists are wrong, the universe isn’t 14 billion years old.

      The young earth creationists are also wrong. the universe wasn’t created 6000 years ago.

      Everything and everyone was in fact created last tuesday.

      Any memories you have from before last tuesday? fake.

      Every ball in flight, every photon en route from Alpha Centauri, all created last tuesday.

  6. knzhou says:

    As a physicist, I’m happy with this post — as always, it illuminates with well-chosen metaphors.

    However, I think there are distinct notions of “non-empirical” in fundamental physics that are getting lumped together, which it might be useful to distinguish.

    The first involves problems of interpretation. These involve fixing a mathematical theory (which is consistent with all experimental data) and then just arguing over the right way to talk about it, e.g. whether various objects in the theory are “real”, or “primary”, or “subjective” vs. “objective” (in some contexts, “epistemic” vs. “ontic”). It happens most often for quantum mechanics and general relativity, e.g. whether “many worlds exist”.

    One can say that these debates are not scientific because they are completely independent of observations. But they have scientific value because they let us explore new ways of thinking about a working theory, which may help us someday move toward a deeper one. For example, suppose you don’t like many worlds but you also don’t like theories with collapse on measurement. How weird does your interpretation have to be in order to avoid both? What comfortable philosophical features do you have to give up? Without talking about interpretations, we wouldn’t know. From the Bayesian perspective these debates help us think about meta-theory selection, which is useful even if they don’t cause standard Bayesian updates.

    The second involves postulating unobservable objects. Often, you have to introduce objects into theories that can’t be directly probed, like the gauge potential or the inside of a black hole or the outside of the cosmological horizon, and this can appear unscientific. I’m not bothered by this for two reasons. First, what’s unobservable today can easily become observable tomorrow — people once thought _atoms_ were unscientific because the number of individual atoms kept cancelling out in the final results, making them look suspiciously like a fictitious calculational device. Einstein won the Nobel prize for finding a situation where the number did matter, Brownian motion. (I once read an old philosophy paper that claimed that the idea of consciousness being someday explained by science was so absurd, that it would be like claiming science could someday explain why bread nourished people and rocks didn’t.) Second, there’s just nothing wrong with having unobservable intermediate quantities — what matters for theory evaluation is that the theory as a whole matches experimental results with high likelihood and low complexity. If you have to do violence to the theory to excise the unobservable stuff, making it more complicated and harder to reason about in the process, then to me nothing is gained.

    The third involves likelihood calculations where the underlying phenomenon is one-shot. For example, if I ask you right now to initialize the fundamental constants of the Standard Model randomly, under any probability distribution you like, then the derived mass of the Higgs boson will almost certainly come out billions of times too big. This will happen with almost certainty unless you’re aware of this game, and cook up some very complicated probability distribution made up for the sole purpose of making it small. So people think there’s a problem, but others counter that this is not science because you can’t “resample the fundamental constants”, so probability has no meaning. Again, to me this objection is demolished by Bayesian reasoning, which allows for theory evaluation on one-shot phenomena. Another objection is that there isn’t any objective notion of what makes a probability distribution or more generally a theory “simple”, which has the exact response you point out — partially subjective theory selection has been good enough for humanity for the whole history of science, and it had to have been because it’s all any of us have.
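
    As a toy numerical illustration of the one-shot fine-tuning point above (a cartoon, not a real Standard Model calculation; the scales and the prior are made up for the example): if the observed Higgs mass squared is a small difference between a bare parameter and a cutoff-scale correction, a randomly drawn bare parameter essentially never lands close enough.

```python
import random

cutoff = 1e19             # a "Planck-ish" scale in GeV (illustrative)
observed_higgs = 125.0    # GeV
correction = cutoff ** 2  # pretend the quantum correction is of order cutoff^2

trials, hits = 1_000_000, 0
for _ in range(trials):
    bare = random.uniform(0, 2 * cutoff ** 2)        # a broad, uninformed prior
    if abs(bare - correction) <= observed_higgs ** 2:
        hits += 1

print(hits, "of", trials, "random draws give a Higgs-sized result")
# Essentially always 0: hitting the observed value needs tuning to ~1 part in 10^34.
```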

    The fourth involves the anthropic principle, but that’s a can of worms I’m neither qualified nor inclined to open. Thinking about anthropic selection effects is a hall of mirrors, anyone can get seriously lost.

    • markus says:

      Fantastic example with consciousness, bread and stones.

    • Akhorahil says:

      Good post, just wanted to correct that what Einstein’s Nobel prize singled out was the photoelectric effect, not Brownian motion.

      • knzhou says:

        Agh, you’re right, thanks. I got it mixed up because I just remember that what he got it for wasn’t relativity.

    • JPNunez says:

      How old was the philosophy paper anyway?

      I mean by the 1800s we had some idea of whether different kinds of foods had different nutrients.

      • 90% probability that “philosophy paper” does not exist. Most likely OP read someone making a comparison with some other comparison etc. and the net effect is that the story of that philosophy paper is simply made up. If there were a time when people thought the nourishing effect of bread was unexplainable, they probably did not use words like “science” and “consciousness.”

        • (Or it could be a parody, by someone like David Hume, not a real position.)

        • knzhou says:

          I have misremembered the context (it wasn’t related to qualia), but the anecdote is correct. This is Hume talking sincerely (section IV, An Enquiry Concerning Human Understanding), when introducing the problem of induction:

          It must certainly be allowed, that nature has kept us at a great distance from all her secrets, and has afforded us only the knowledge of a few superficial qualities of objects; while she conceals from us those powers and principles on which the influence of those objects entirely depends. Our senses inform us of the colour, weight, and consistence of bread; but neither sense nor reason can ever inform us of those qualities which fit it for the nourishment and support of a human body. Sight or feeling conveys an idea of the actual motion of bodies; but as to that wonderful force or power, which would carry on a moving body for ever in a continued change of place, and which bodies never lose but by communicating it to others; of this we cannot form the most distant conception. But notwithstanding this ignorance of natural powers[1] and principles, we always presume, when we see like sensible qualities, that they have like secret powers, and expect that effects, similar to those which we have experienced, will follow from them. If a body of like colour and consistence with that bread, which we have formerly eat, be presented to us, we make no scruple of repeating the experiment, and foresee, with certainty, like nourishment and support.

          The point still has some force, but the specific example he used has lost almost all of it with time.

    • Reasoner says:

      Thanks for weighing in.

    • StellaAthena says:

      Excellent comment! It would be nice if you could give a list of where different common untestable physics ideas fall. I would offer to, but I’m quite unqualified for the task. I can however tackle the anthropic principle. There are two forms, the strong form which I’ll call SAP and the weak form which I’ll call WAP.

      As you allude to, there seems to be a bizarre amount of fine tuning to the universe. The Standard Model of particle physics has 20 or so parameters that appear to be (theoretically) able to be set to any combination of values. However for the vast majority of the values the universe sorta just… doesn’t happen. Wikipedia has a good image for this. The plot on the left spans what can roughly be considered “all values” while the one on the right is zoomed in on the interesting region. Both plots are on a logarithmic scale. The tiny green triangle where the arrow is pointing is the only region suitable for developing anything like our universe. Crossing the orange line prevents stars from existing. Crossing the red line causes stable chemical compounds to cease to exist. Moving further out causes even more drastic changes, such as preventing atoms from forming at all or causing protons to undergo radioactive decay.

      This plot only looks at two of the 25 (per wikipedia) parameters. A thorough treatment would involve 299 more plots like this one, with the requirement that all the green sections intersect. And that’s just to produce a universe that’s chemically and radiologically stable enough to produce large objects that exist for billions of years; we haven’t even touched conditions necessary for life as we know it to occur.

      People looked at this and (understandably) went “what the actual fuck.” The AP is an attempt to reason with the astronomical unlikelihood of the universe. There are a couple versions depending on who you read, but here’s a rough gloss. The SAP says that the universe must necessarily produce conscious life. There are a couple attempts at justifying this, including creators who deliberately produced life (god, simulation programmers) and some QM interpretations (including some, but not all, MWI variants). The WAP takes a much gentler approach and claims merely that by the nature of us existing, the universe must have been such that we existed. This approach is closely connected with the one-off experiments, and points out that survivorship bias is a necessary part of any observations we make.

      Bostrom has a closely related principle, which says (in his own words) “[e]ach observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class.” This principle is the most interesting version to me, because it does allow us to make testable (in theory) predictions. For example, if you arrange every human who will ever live in order of birth, 90% of people fall in the middle 90% of the list. This means that I can be pretty sure that I am neither one of the last 5% of people, nor one of the first 5% of people. This allows me to estimate the number of people who will ever live if I can estimate the number of people who have lived so far. If 100 billion people have ever lived (approximately current estimates) then we can be pretty sure the total number of people ever born will be no less than about 105 billion and no more than about 2 trillion, that is, at least roughly 5 billion more births and at most roughly 1.9 trillion more. Note that it’s important that we are specifically quantifying over people and not anything else, such as years. At current growth rates, the 1.9T figure gives humanity less than 1000 years.
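
      The arithmetic behind those bounds, using the figures from this comment (100 billion births so far and a 90% interval):

```python
born_so_far = 100e9

lower_total = born_so_far / 0.95  # "not in the last 5%"  => total >= ~105 billion
upper_total = born_so_far / 0.05  # "not in the first 5%" => total <= 2 trillion

print(f"total births ever: {lower_total:.3g} to {upper_total:.3g}")
print(f"births still to come: {lower_total - born_so_far:.3g} to {upper_total - born_so_far:.3g}")
# total births ever: 1.05e+11 to 2e+12
# births still to come: 5.26e+09 to 1.9e+12
```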

  7. LPM says:

    If you include untestable causes such as Satan (or the Higgs boson, previously) in a model, you would need the model to perform much better than models without Satan for it to be plausible. If the performance is much better than all alternatives then it should be preferred (not always true in reality – see Galileo). The problem with adding Satan is that there are too many alternative models with a more concentrated plausibility.

  8. daneelssoul says:

    “A choice between purely metaphysical constructs, without reference to empirical evidence” – This actually has interesting philosophical implications. Like, if you have two different theories that predict *exactly* the same empirical observations, is there a meaningful difference between them?

    I mean Satan and the dinosaurs I guess are different, but part of that is that the existence of Satan *should* come with a lot of other (theoretically) testable predictions. If “Satan” were really just some force of nature that simulated the existence of dinosaurs to high fidelity and left exactly the traces that they would have left, is this really meaningfully different from dinosaurs actually existing?

    Or maybe a more convincing example: the Lagrangian formulation of physics. This is a theory that is mathematically equivalent to the Newtonian formulation of classical mechanics (meaning that it makes exactly the same empirical predictions). Is it even meaningful to talk about which of the two formulations is “true”? However, I claim that despite the choice here being between “purely metaphysical constructs, without reference to empirical evidence”, there are some problems that you would rather solve using one formulation rather than the other.
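
    A small symbolic sketch of that equivalence for the simplest case that comes to mind, a unit-mass harmonic oscillator (the example system and the sympy code are mine, purely illustrative):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# Newtonian formulation: F = m a with F = -k x (m = k = 1).
newton = sp.Eq(x(t).diff(t, 2), -x(t))

# Lagrangian formulation: L = T - V, then the Euler-Lagrange equation.
L = sp.Rational(1, 2) * x(t).diff(t) ** 2 - sp.Rational(1, 2) * x(t) ** 2
euler_lagrange = sp.Eq(sp.diff(L, x(t).diff(t)).diff(t) - sp.diff(L, x(t)), 0)

print(newton)
print(euler_lagrange)
print(sp.simplify(euler_lagrange.lhs - (newton.lhs - newton.rhs)))  # 0: same equation of motion
```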

    Just because two theories are mathematically equivalent does not mean that you shouldn’t favor one theory over the other.

  9. ImprovedEnvironment says:

    I think the answer is clearly yes. But consider what this commits us to. Suppose the scientist came up with their Extradimensional Sphere hypothesis after learning the masses of the relevant particles, and so it has not predicted anything. Suppose the extradimensional sphere is outside normal space, curled up into some dimension we can’t possibly access or test without a particle accelerator the size of the moon. Suppose there are no undiscovered particles in this set that can be tested to see if they also reflect sphere-related parameters. This theory is exactly the kind of postempirical, metaphysical construct that the Aeon article savages.

    As a counterpoint to this thought experiment:

    Assume humans come up with the Extradimensional Sphere hypothesis after discovering all fundamental particles. A great debate over whether the Extradimensional Sphere hypothesis is “real science” or “postempirical metaphysics” occurs over the next five years.

    Then, humans come into contact with a group of extraterrestrial aliens. While trading information with the aliens, humans discover that scientific progress went differently for the aliens – the Extradimensional Sphere hypothesis was proposed before most fundamental particles were discovered, and as a result the hypothesis created several testable predictions. Thus, the aliens widely accept that the Extradimensional Sphere hypothesis is “real science”.

    Should the humans revise their opinion of whether the Extradimensional Sphere hypothesis is “real science” based on the alien history? Should the aliens revise their opinion of whether the Extradimensional Sphere hypothesis is “real science” based on human history.

    ===============================

    I think the resolution of the points raised in this post is simple: it’s possible to do post-hoc Bayesian analysis. You can test a hypothesis using the data which suggested the hypothesis. It’s just so hard to do without accidentally confirmation-biasing your way to false conclusions that you should almost never attempt it. This is particularly true when the alternative to “attempt incredibly difficult post-hoc Bayesian analysis” is the much easier “gather more evidence after creating your theory”. Some heuristics, such as Occam’s razor, work by approximating post-hoc Bayesian analysis, much as other heuristics, such as the representativeness heuristic, work by approximating regular Bayesian analysis.
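
    A small simulation of why that is so treacherous; the coin flips are entirely synthetic and the “special group” is hypothetical:

```python
# Why testing a hypothesis on the data that suggested it is treacherous:
# select the most extreme group from pure noise, then check it on the same
# data versus on fresh data. Entirely synthetic, for illustration only.
import random

random.seed(0)
n_groups, n_flips = 20, 30

def flip_groups():
    return [[random.random() < 0.5 for _ in range(n_flips)] for _ in range(n_groups)]

data = flip_groups()
rates = [sum(g) / n_flips for g in data]
best = max(range(n_groups), key=lambda i: rates[i])   # hypothesis chosen AFTER seeing the data

print("apparent head rate of the 'special' group:", rates[best])        # well above 0.5
print("same group on fresh data:", sum(flip_groups()[best]) / n_flips)  # back near 0.5
# The first number was selected to look impressive; gathering new evidence
# (the second number) is what exposes the illusion.
```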

    • Murphy says:

      That’s a good point.

      Further, imagine you meet 3 alien species.

      They’ve all grappled with the same problem and all 3 created a model that predicted future discoveries/properties of yet to be discovered particles before discovering them.

      And all the models match the data.

      They each consider their model the simplest by their own idea of simple, because one alien species has a brain very good at dealing with elasticity, one intuitively understands fluid dynamics, and one sees 4D toroids as being as simple as kids’ playsets.

      They’re all different models with a different set of implications about the structure of space-time that are unfortunately…. unfalsifiable.

      And none of them are your sphere model.

      Do you discard your sphere model? Do you adopt one of the alien models? If so which one?

  10. gkai says:

    The classic explanation wins against “Satan did it to confuse” because it provides an interpretative pattern that helps in looking for confirming or disconfirming facts, i.e. it gives a way to interpret the world that is broader than just the fossils used to construct the theory. That’s the mark of any good theory: it can make predictions outside the collection of stuff used to build it. The theory then becomes scientific as soon as you can test those new predictions…

    “Satan did it to fool us” is not a good paleontology theory: to make it equivalent in predictive power to the classic theory, you wrote this:

    Since Satan is trying to fool us into believing the modern paleontology paradigm, he’ll hide the fossils in ways that conform to its predictions, so we will predict plesiosaur fossils will only be found at sea – otherwise the gig would be up!

    This is not predictive in itself: you have to include the classic theory inside the Satan hypothesis, since only by knowing the classic theory can you predict what Satan will do to fool you!

    The Satan theory is thus not an alternative; it is a superset of classic paleontology, and to have any merit it should predict things outside paleontology.

    The only thing I could think of is “As Satan’s goal is to make God unbelievable, he should plant as many false hints as possible to make a full, purely scientific explanation of the world possible and easy”. It doesn’t look like he did a very good job (QM? Relativity? Satan apparently took some vacation, hence the end of classical Newtonian physics), and anyway this does not seem like a particularly nefarious goal: as a side effect it provides humankind with ever larger control over the material world, something in line with past (if not current) Christian doctrine…

    BTW, your sphere example makes testable predictions: either a missing particle with a specific mass linked to another sphere property, or, if all the properties have been used up, no more new particles to find.

    Maybe the sphere properties will become more and more convoluted as new particles get discovered, and at some point people will start to find the explanatory-power-to-complexity ratio of the superfundamental sphere really poor, and they will look for a replacement or drop it altogether. This is similar to what actually happened with epicycles for planetary motion…

    • Akhorahil says:

      I would like to propose Meta-Satanic Paleontology: It is in fact all true, but Satan made you believe that he planted all the fossils.

    • FormerRanger says:

      As has been pointed out, bringing Satan into the hypothesis greatly broadens it: it requires Satan to exist (what’s the evidence?), and he must either give the game away by being sloppy and detectably existing (being summoned by Faust?) or confine himself to paleontology, thereby limiting his scope to a degree that makes him completely not “Satan.” You can’t have it both ways.

      Of course, some Creationists believe that God created the “fake-but-accurate” fossils, which is a whole different kettle of fish dinosaurs.

  11. encharitimone says:

    Regarding the “subjectivity” of Bayesian methods: they’re as subjective or objective as A) the basis for your priors and B) the calculation of your updates.

    If your prior is that “the many-worlds hypothesis sounds like hippy hokum” with a certainty of 60%, and you hear a physicist explain it in a way that “sounds more reasonable”, and consequently update that view to 30%, then congratulations, your “Bayesian” reasoning is utterly subjective.
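
    To make that concrete, here is the same 60% to 30% update written out in odds form; the implied likelihood ratio is exactly where the subjectivity lives:

```python
# The 60% -> 30% update from the example above, written as a Bayes-rule step.
# The arithmetic is objective; the implied likelihood ratio ("how much more
# likely is an explanation this reasonable-sounding if MWI is NOT hokum?")
# is entirely a judgment call, which is where the subjectivity lives.

def implied_bayes_factor(prior, posterior):
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

bf = implied_bayes_factor(prior=0.60, posterior=0.30)
print(f"implied likelihood ratio for 'hokum': {bf:.2f}")       # ~0.29
print(f"i.e. treated as ~{1 / bf:.1f}x more likely under 'not hokum'")
```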

    Obviously, in reality it’s rarely that obvious. For example, I suspect that most people’s priors on the existence of some type of “God” (regardless of whether that initial prior is high or low), are a lot more subjective than we’d like to admit.

  12. Jack V says:

    Yeah, that’s roughly what I’d say.

    An example I often use is, “If space is expanding, then light or any effect from sufficiently distant points will never reach earth or vice versa. Is the simpler theory that physics works just the same there, or a different way, or that nothing exists?”

    That seems analogous to the many worlds example, because you don’t NEED to postulate that things work the same — if physics were completely different there, we wouldn’t see the effects. But the theory “like this everywhere” seems simpler than “like this everywhere in this sphere, but different elsewhere”. Either works, but one is better. So even though we can’t KNOW, I know what I think it’s like there.

    That avoids the “needing to embed the entire theory in the alternative hypothesis” problem with the Satan example — actual theories of Satan deceiving us don’t postulate that he’d made up a perfect theory of evolution, only enough to justify the evidence already found…

    I’ll also note, listening to physicists like Aaronson replying to Yudkowsky’s explanation of many worlds, it doesn’t sound like the “anything but many worlds just doesn’t make sense” argument is as slam-dunk obvious as Yudkowsky made it sound: there’s something about how state evolves across worlds that the analogy doesn’t cover, but I don’t understand it. But I have noticed that no one ever points to any other problem with many worlds; even when people disagree, it’s “we don’t need it, collapse is fine”, not “it wouldn’t work”.

    For instance, IIRC, Aaronson said he thought it was still possible there was a size beyond which superposition just didn’t happen, for some physical rather than engineering reason. Given the little I know, that didn’t make sense, but… he’d know. If so, many worlds would actually be wrong, I think. But assuming not, then what we both said about it stands.

  13. nameless1 says:

    > It transforms our worldview less.

    This is a terrible argument, Scott. It transforms YOUR worldview less. The Satan theory transforms a Bible-believing Young Earth Creationist Christian’s worldview less. And which one would have transformed Sir Isaac Newton’s worldview less?

    It transforms one particular worldview less: a worldview that is not only scientific but more or less explicitly atheist, or at least deist, one that does not expect supernatural meddling in things. Sir Isaac Newton would not have accepted that worldview.

    You are effectively saying it is more elegant because we are already atheists and it is more in line with atheism. What kind of science would that be? Every Christian would say, correctly: look, Scott is admitting that they are just doing atheist propaganda, or emitting a smug sense of atheist superiority, nothing more.

    You would do far better to point out a long-running Christian tradition which holds that neither God nor Satan nor anyone else tends to mess with our perceptions and reasoning about the natural world; that miracles are pretty rare, and every mainstream Catholic or similar theologian would laugh his ass off at the idea of Satan filling the world with fake items to “test our faith”; that only very, very fringe Christian theologians can think so; that it is not in line with what Aquinas or anyone of that type said about how the universe works and how our perception of it works, thinkers who had a far more robust *faith* that what we see in Nature, and how we explain it through natural reasoning, tends to be pretty good, because God created a rational, intelligible universe without such traps; and that this is the kind of theological, philosophical thinking that created our whole tradition of science, and so on… and that it transforms THAT worldview less.

    • Jonluw says:

      Bugmaster clarified the argument in a good way above:

      Our two competing theories are not merely “long-dead ecosystem of actual dinosaurs” vs. “Satan tricked us”, but “long-dead ecosystem of actual dinosaurs” vs “Satan tricked us in a very specific way that looks exactly like a long-dead ecosystem of actual dinosaurs”

      In other words, the dead ecosystem is a better theory, not because it alters a previous worldview less, but because the alternative theory introduces another factor (Satan) without improving on its explanatory power.

      • nameless1 says:

        Yes, that is a very good point. What I tried to point out is that different groups have different worldviews and just saying it transforms OUR worldview less is not going to be accepted by groups who have different worldviews. They may see it as kinda arrogant.

        What I like about Bugmaster’s version is that it is from a neutral viewpoint. That is, if Group B has a different worldview in which Satan exists, does stuff, and the world is 6000 years old because the Bible says so, then while “Satan tricked us in a very specific way that looks exactly like a long-dead ecosystem of actual dinosaurs” is actually more compatible with their worldview, they have to accept that it has no explanatory power over “long-dead ecosystem of actual dinosaurs”. So it has compatibility going for it, but not explanatory power.

        And because of that it can be reversed. That is, “us” from Scott’s perspective, physicalist atheists for example, do not find parapsychology compatible with their worldview despite studies in the Lancet and all that. But that is just sort of their problem, just like Young Earth Creationists not finding dinos compatible with their worldview is just sort of their problem. The question is: what explanation of parapsychologists being able to produce Lancet-quality studies has the best explanatory power?

  14. Akhorahil says:

    I have long been a fan of the Lakatos idea that what matters is whether a research programme is productive or not. I can’t really tell whether string theory is proper science or not – I’m leaning towards that it is, but I’m not committing to that. But it certainly doesn’t seem to be productive. All this time, and what do we have that we didn’t have before? Do we have any new predictions or explanatory powers? If not, that doesn’t necessarily mean that it’s non-scientific or false (only truly degenerate research programmes deserve to be called pseudo-science), just that we should give up on the barren project. And of course, it’s often hard to tell whether a research programme is going to be productive when we first set out on it.

    Paleontology is productive in that it makes discoveries and creates explanations, but Satanic Paleontology isn’t. Are multiverse theories productive? It doesn’t seem so, although some types of multiverse theories at least make predictions about how we might detect other universes if they exist. Are we living in a simulation? Not sure; can you do any productive research on it (just as with the multiverses, some people have ideas for ways in which we might be able to detect it if it’s indeed the case) or produce any new explanations that are consistent with observations?

    (By the way, this is also the reason we shouldn’t even bother with research about whether homeopathy works any longer – not only do we know that it makes no sense and would have to tear our entire system of physics apart in order to work, it has also been studied and shown not to work in hundreds of studies. Enough already, it’s a pure waste of resources!)

    • Akhorahil says:

      Fun fact: Lakatos and Feyerabend were going to make an adversarial collaboration.

      Wikipedia: “Lakatos and Feyerabend planned to produce a joint work in which Lakatos would develop a rationalist description of science and Feyerabend would attack it. The correspondence between Lakatos and Feyerabend, where the two discussed the project, has since been reproduced, with commentary, by Matteo Motterlini.”

    • Doesntliketocomment says:

      Thanks for making a probably better version of the objection I was going to make. Do you want to learn about dinosaurs? Then you should believe in paleontology. Want to learn about Satan? Then you should explore the “Satan made fossils” argument, starting by looking at the fossils. In this case the solutions are asymmetric, in that you will learn a lot about dinosaurs but little about Satan.

      In a similar way, Copernican theory allowed additional information to be elucidated about the planets, though not right away. The gap between Copernicus’s writing and its acceptance should be a model for us: until other areas of science and theory caught up, it did not appear to have predictive power, so from our perspective it seems to have languished for many years. In those years, however, it served an important function, existing as a possibility that spurred further thinking.

    • Paul Brinkley says:

      This is similar to another glib aphorism I like to say about truth, particularly truth that is uncovered using science:

      It’s truth if you can make money off of it.

      This is very oversimplified from what I really mean, of course, sort of like how Gresham’s Law simplifies to “bad money drives out good”.

      The idea is to capture the spirit behind various pursuits that employed science to make our lives better, such as irrigation, refrigeration, transistors, and the Bessemer process. One may continue to insist that all of these wonders are illusions created by some nefarious higher power to mess with us, but at the end of the day, it’s still hard to argue with a laser.

      It’s hard to extend this to truths as abstruse as those in paleontology, though. Same goes for a lot of scientific pursuits, such as number theory or string theory. Who cares if there are eleven dimensions, seven unreachable, or if there are infinitely many twin primes, or if there were once stegosaurs? The only way these make money – are productive – is by someone arbitrarily declaring they’ll pay people to come up with them, or for entertainment.

      And yet, some truths in number theory became extremely valuable for protecting information while still being able to send it. Someday, someone might be able to reach other dimensions consistently, and do something useful like store bacon in them. Or someone will use the fact that there were stegosaurs to figure out how they came to be, and then figure out how to recreate them, and find something interesting in their DNA that reveals a cheap means of de-aging. (Or of producing more bacon.)

      So, a claim is true enough if we can make money off it, which in turn justifies the means we used to confirm that claim. Even a false claim is useful, by pruning the places we have to look. The most useful false claims are those that prune the most places for the least effort. The process that shows which areas are most worth our effort is science.

  15. Protagoras says:

    The Satan model, like many toy examples, is only untestable in kind of a special way. For a long time, people treated theology essentially like science, and attempted to understand the world on the basis of the activity of various powerful non-human agents, gods and demons and angels. And those theories work quite badly; they make highly inaccurate predictions. They don’t enable any useful technologies. Attempts to provide a coherent account of the psychology and sociology of these hypothetical agents fit poorly with our understanding of the psychology and sociology of humans, casting doubt on any effort to understand them as agents. Now, in principle it is always possible to add epicycles and ad hoc assumptions to deal with any problems, so of course you can say that whatever the failures of existing theologies it must be possible in principle to construct a theology which makes detailed predictions exactly matching our observations. But I think it’s relevant that nobody has actually done so; mostly people have given up on trying to patch theology, and those who continue to work on it continue to produce only more failure, in contrast to the successes of rival approaches. And I don’t see any of this stuff at work in the physics examples, and people like Sabine Hossenfelder argue that the recent attempts to judge theories of basic physics on the basis of “elegance” actually have a rather poor track record. So I’m not sure the analogy between naturalist interpretations of paleontology and the many worlds version of quantum theory holds up under close examination.

    • Charlie Lima says:

      Have you actually read any theologians? I can think of precious few who devoted their work to demonology or angelology. Even those specifically known for it, like Alphonso de Spina or Johann Meyer, typically devoted the bulk of their work to other topics, and wrote quite lengthy dissertations arguing that most things were not of demonic or angelic origin.

      Aquinas, for instance, spent more time talking about how lending at interest increases inequality and how the value of goods is determined by their usefulness than he ever spent on demons. Which theologians, exactly, were spending all this effort on explaining life on the basis of demonology and angelology?

      • Protagoras says:

        I said gods and demons and angels. It is of course true, but hardly to any purpose, to note that Christian theologians tended to talk more about God than about demons and angels. But to the extent that you are correct that even theologians have had a preference for non-divine explanations, I can think of nothing that could more dramatically illustrate the failure of the theological project.

        • Jaskologist says:

          You know how sometimes people dismiss philosophy as useless because it can’t figure anything out, as opposed to science? And then you or I get annoyed and point out that science is just a term for a bunch of things in the philosophy field that figured stuff out? That’s what you’re doing here.

          “But to the extent that even philosophers have had a preference for scientific explanations, I can think of nothing that could more dramatically illustrate the failure of the philosophical project.”

          • The original Mr. X says:

            Why stop at just philosophy? Historians talk about the need to read old books, but when you’re feeling sick, do you go to the doctor or do you crack open your copy of Tacitus? Checkmate, theists atheists historians!

          • Garrett says:

            crack open your copy of Tacitus

            I have a copy of a medical textbook which is almost 100 years old at this point. It’s fascinating in the “huh – they thought that and it was totally wrong” kind of way. But also horrific in some of the treatments they prescribed. Anything that suggests using a mixture with cadmium in it gets an immediate “nope” from me.

          • The original Mr. X says:

            I have a copy of a medical textbook which is almost 100 years old at this point. It’s fascinating in the “huh – they thought that and it was totally wrong” kind of way. But also horrific in some of the treatments they prescribed. Anything that suggests using a mixture with cadmium in it gets an immediate “nope” from me.

            Yes, but the difference is that the medical textbook is about the same thing as its more up-to-date counterparts, meaning we can meaningfully compare them to see which is more accurate. OTOH, science and history, or science and theology, are studying different things, and it’s a category error to treat theology or history as if they were defective versions of science, or vice versa.

        • The original Mr. X says:

          Actually I think it’s evidence that ancient and medieval thinkers were generally smart enough to realise that not every intellectual discipline falls into the categories of either “SCIENCE!!!1!! [tm]” or “failed attempts at SCIENCE!!!1! [tm]”.

          • Protagoras says:

            I don’t know what “SCIENCE!!!1!! [tm]” is, but for the most part ancient and medieval thinkers seem to me to have had a more rather than a less unified view of the range of intellectual disciplines as a whole.

          • The original Mr. X says:

            Ancient and medieval thinkers were quite capable of recognising that natural phenomena differed from supernatural ones, that techniques relevant to one did not necessarily work for the other, and that expertise in one did not equate to expertise in the other. God did not call us to be astronomers, and all that.

            ETA: Though, perhaps you can quote a theologian who actually does what you accuse him of doing, and uses “God/angels/demons do it” as an explanation for some natural phenomenon?

            (And to be clear, I mean an explanation that excludes recourse to natural causes, not something like “God is the ultimate cause of earthquakes, but the proximate cause is [gas escaping from the earth, or some other naturalistic explanation]”.)

      • Tivi says:

        The sample of theologians I’ve come across does seem to spend a fair amount of work on angels and demons. Like this stuff: https://franciscan-archive.org/bonaventura

        Question 3: Whether an Angel is in an indivisible or point-like place?
        Question 4: Whether several Angels are together in the same place?
        Chapter 8: That the evil angels are not creators, though through them mages make frogs and other things; just as neither do the good Angels, even if through their ministry creatures come to be.
        Question 3: Whether demons can illude the senses?
        Question 3: Whether an Angel passes through a medium by a sudden movement, and/or by a successive one?
        Chapter 1: That souls have each a good Angel to guard them, and an evil angel to exercise them.
        Question 2: Whether a guardian Angel’s joy is amplified on account of the beatification of the one guarded?

        (And of course even more on God’s superpowers, personality and structure)

        I get the impression that early-ish christian theologians spent a ton of effort on rules lawyering about supernatural entities’ superpowers, relationships and motivations; and their output on stuff like theodicy and free will is what gets talked about, not because it was a particularly large part of their output, but because it’s the part that doesn’t look silly from a modern perspective.

        • The original Mr. X says:

          Question 3: Whether an Angel is in an indivisible or point-like place?
          Question 4: Whether several Angels are together in the same place?
          Chapter 8: That the evil angels are not creators, though through them mages make frogs and other things; just as neither do the good Angels, even if through their ministry creatures come to be.
          Question 3: Whether demons can illude the senses?
          Question 3: Whether an Angel passes through a medium by a sudden movement, and/or by a successive one?
          Chapter 1: That souls have each a good Angel to guard them, and an evil angel to exercise them.
          Question 2: Whether a guardian Angel’s joy is amplified on account of the beatification of the one guarded?

          And what percentage of St. Bonaventure’s total oeuvre does that represent? 1%? Less?

          Also, bear in mind that the original claim wasn’t just that theologians wrote about angels, but that “people treated theology essentially like science, and attempted to understand the world on the basis of the activity of various powerful non-human agents, gods and demons and angels”. Of the seven questions or chapters whose titles you quoted, six of them have nothing to do with the physical world, and the only one that does seems to explicitly deny angelic responsibility for natural phenomena (“the evil angels are not creators… neither [are] the good angels”).

          I get the impression that early-ish christian theologians spent a ton of effort on rules lawyering about supernatural entities’ superpowers, relationships and motivations; and their output on stuff like theodicy and free will is what gets talked about, not because it was a particularly large part of their output, but because it’s the part that doesn’t look silly from a modern perspective.

          I have actually undertaken formal study into early-ish Christian (note the capitalisation) theologians, and your impression is totally wrong. Few if any theologians devoted a considerable part of their works to angelology, and even fewer used “angels did it” to explain the natural world.

    • The original Mr. X says:

      For a long time, people treated theology essentially like science, and attempted to understand the world on the basis of the activity of various powerful non-human agents, gods and demons and angels.

      I’m going to predict that you haven’t read much actual theology.

    • Jaskologist says:

      Johannes Kepler, Harmonies of the World, the text where he gives us Kepler’s 3rd Law:

      For the Creator, who is the very source of geometry and, as Plato wrote, “practices eternal geometry,” does not stray from his own archetype.

      …it is consonant that if the Creator had any concern for the ratio of the spheres in general, He would also have had concern for the ratio which exists between the varying intervals of the single planets specifically and that the concern is the same in both cases and the one is bound up with the other. If we ponder that, we will comprehend that for setting up the diameters and eccentricities conjointly, there is need of more principles, outside of the five regular solids.

      The people who kicked off the Scientific Revolution believed that our Creator God was a lawful, orderly god, and if something appeared disordered in the world (as the planetary movements did, being from their perspective 6 stars that didn’t behave like all the rest) that was a sign they needed to look harder to discover the real harmony in the system.

      Turns out, that was a pretty good prediction by Kepler. His much more elegant solution to planetary movement (ellipses instead of epicycles) inspired others to look for deeper, more elegant, laws. That enabled a great deal of useful technology.

      I commence a sacred discourse, a most true hymn to God the Founder, and I judge it to be piety…I am free to give myself up to the sacred madness, I am free to taunt mortals with the frank confession that I am stealing the golden vessels of the Egyptians, in order to build of them a temple for my God

      Ibid., intro

      • Protagoras says:

        First of all, obviously we’re looking for a pattern, not anecdotes; the trend has been for scientists to become notably less inclined to wax theological over the past few centuries, and during that time scientific progress has become more rather than less impressive. But if the God hypothesis leads so naturally to ellipses, why is it that for thousands of years previous thinkers, all of them believers, had completely missed that possibility? It doesn’t seem like this quote has plausibly identified the source of Kepler’s insight (even if Kepler thought otherwise; introspection is of course notoriously unreliable).

        • The original Mr. X says:

          First of all, obviously we’re looking for a pattern, not anecdotes; the trend has been for scientists to become notably less inclined to wax theological over the past few centuries, and during that time scientific progress has become more rather than less impressive.

          Obvious alternative hypothesis: both of these things are due to increased academic specialisation, and have nothing to do with any conflict between theology and science.

          But if the God hypothesis leads so naturally to ellipses, why is it that for thousands of years previous thinkers, all of them believers, had completely missed that possibility?

          Well for one thing, there’s no such thing as “the” God hypothesis: there are different religions, some of which are more likely to give rise to the sort of intellectual background which would lead to the scientific method, and others of which are less likely. Graeco-Roman paganism, for example, with its multitude of conflicting spirits, isn’t really going to give you the belief in the lawful regularity of the universe which you need to get science off the ground; Plotinian monotheism might, but it tended to denigrate the physical world, and so disincentivise study of natural phenomena; and so on.

          For another thing, astronomy wasn’t treated as a branch of physics, and hence of (what we would call) science, until around Kepler’s time; before that, it was a branch of mathematics, so astronomers weren’t really interested in investigating the actual physical objects in the sky, only in predicting where they’d appear next, and the Ptolemaic system did that just fine.

          And thirdly, X can be necessary for Y without being sufficient. If I were to say “Charles Darwin wouldn’t have come up with the theory of evolution if he wasn’t interested in animals,” it wouldn’t be a very good rebuttal to say “Yeah, but what about all the other people who were interested in animals but didn’t come up with the theory?” Similarly, if I say that “Belief in a lawful Creator was necessary for the development of the scientific method”, “But what about the other thinkers who didn’t develop the scientific method?” would miss the point somewhat.

  16. zluria says:

    So I’m gonna address the elephants in the room here.

    1. To a total physics boor like myself, it seems totally obvious that the many-worlds interpretation makes far weirder assumptions than almost any other interpretation. Occam’s razor/common sense just seems to make it very unlikely, compared to other options.

    2. Is there some kind of “rationalist cult” dynamic at play here? I know that Eliezer Yudkowsky made a BIG point of preferring the many-worlds interpretation, e.g. https://www.lesswrong.com/posts/9cgBF6BQ2TRB3Hy4E/and-the-winner-is-many-worlds.
    Is this post a sort of apologetics post defending the dogma of the cult leader?

    • NoRandomWalk says:

      I’m not a scientist, or an Eliezer fanboy. (I’ve read and like his stuff, I just don’t assume something is true because he thinks it.)

      What I think you’re missing is that in science there are notions of simplicity/mathematical complexity that have in practice been more useful than ‘human intuition about weirdness’. ‘The witch did it’ can feel like a super non-weird explanation, but if you had to write down in lines of code what that meant, it would be a more complicated program running the world than ‘the little girl is lying’.

      Similarly, what seems to matter in modern physics is ‘complexity of explanation, not bigness of the stuff involved’. Something being infinite has more bigness, but is a lot less complicated than that thing being only exactly as big as we can observe it. Similarly, many worlds is ‘bigger’ but mathematically a lot simpler. The formulations of simplicity/Occam’s razor that have proven themselves elsewhere, if applied analogously to quantum physics, suggest many worlds is plausible or even likely.
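
      A crude illustration of the ‘complexity, not bigness’ point (source length and bit counts are only rough stand-ins for description length, and the cutoff number is made up):

```python
# "Complexity, not bigness": a rule with no arbitrary cutoff can be shorter to
# write down than the same rule truncated at exactly the size we happen to have
# observed. Source length and bit counts here are only crude stand-ins for
# description length, and the cutoff value is invented.
import math

rule_everywhere = "def field(n): return n % 2"                              # holds for all n, forever
rule_truncated  = "def field(n): return n % 2 if n < 93_824_112 else None"  # stops at the observed edge

print(len(rule_everywhere), "vs", len(rule_truncated), "characters")
print("encoding the arbitrary cutoff alone costs ~",
      math.ceil(math.log2(93_824_112)), "extra bits")                       # ~27 bits
```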

      • Faza (TCM) says:

        I’ve made this point elsewhere, but worth repeating here: if many worlds gives you simpler maths, by all means use the simpler maths. However, simple maths doesn’t obligate you to accept that there actually exist many worlds. The map is not the territory.

        • Akhorahil says:

          A complication here is that it seems almost a universal rule that whenever we say “this thingy is merely a theoretical construction to make the math easier, you’re not supposed to believe in it”, said thingy later turns out to have a separate existence.

          • Faza (TCM) says:

            Well, then we’ll know, won’t we?

            This is Popper in a nutshell. If you’re able to tell the difference between a world where your proposition is true and one where it is false, you have a falsifiable hypothesis, even if it is never actually falsified.

            If you will never be able to tell the difference between the two, you have a ZEVO.

          • koreindian says:

            What falsifies the falsification principle? Looks like a ZEVO to me.

          • Faza (TCM) says:

            The falsification principle isn’t a truth-apt statement; it is a method. We don’t claim it is true (that doesn’t even parse), we might claim it is useful.

            Think about how we might try to formulate the principle, say: “Scientific theories should be falsifiable”.

            That is a normative statement. You cannot compare it to some independently existing feature of the universe and determine that it is true – unless you have some quantity of good lying around*, in which case, could I have some?

            We can, however, ask “what if our scientific theory isn’t falsifiable”?

            A non-falsifiable theory is one that is true in all possible worlds. This means that whatever our observations, the theory will explain them. It seems neat, until we realize that this means the theory doesn’t actually tell us anything.

            Whenever we make a prediction, we’re cleaving out (at least) two possible worlds: one where our prediction is true, and another where it is false. We then wait and see if we predicted correctly or not, depending on which of the possible worlds was actualized.

            To say a theory is “unfalsifiable” is to say that there exists no observation that would indicate the theory is false. Therefore, an unfalsifiable (and very simple) theory of gravity would go something like “objects fall down, or shoot up into space, or do any other possible thing we can imagine, plus some we can’t”. Such a theory would be immune from being disproved, but we couldn’t use it to make any predictions of how a dropped object would behave, because literally any behaviour would be consistent with the theory.

            It therefore seems that unfalsifiable theories aren’t particularly useful for predictions (or anything else, really), so why bother with them?

            Aside: Given that you can multiply unfalsifiable theories ad infinitum (every unfalsifiable theory fits all possible observed universes by definition), we have no method of selecting which unfalsifiable theories to give credence to, other than “all or none”.

            * Another way to formulate the principle would be “It is good when scientific theories are falsifiable” and compare it to a known quantity of good (see prototype measures in the metric system).
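
            To make the “unfalsifiable theory of gravity” point concrete, here is a minimal sketch; the four outcomes and their equal weighting are invented for illustration:

```python
# The "unfalsifiable theory of gravity" above, in miniature: a theory's predictive
# content can be read off from how many possible outcomes it rules out. Treating
# the four toy outcomes as equally likely, a theory consistent with everything
# carries zero bits of information.
import math

possible_outcomes = ["falls down", "shoots up", "hovers", "turns into a swan"]

def bits_of_information(theory_allows):
    allowed = [o for o in possible_outcomes if theory_allows(o)]
    return math.log2(len(possible_outcomes) / len(allowed))

falling_theory = lambda o: o == "falls down"   # rules out 3 of 4 outcomes
anything_goes  = lambda o: True                # immune to disproof, says nothing

print(bits_of_information(falling_theory))     # 2.0
print(bits_of_information(anything_goes))      # 0.0
```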

        • DocKaon says:

          The basic thing is Many Worlds doesn’t have simpler math. It has the exact same math because it predicts the exact same physical phenomena. It has different words to go along with the math. It says when you apply the Born Rule you’re determining the probability of which branch of the universal wave function you’re in as opposed to the probability of which state the wave function will collapse into.

          • Jonluw says:

            Correction: Many worlds has the exact same math as Copenhagen in calculations where quantum objects interact with each other.
            In calculations where an “observer” measures an outcome, Copenhagen uses different (ill-defined, I may add) math. Many worlds, on the other hand, uses the same math across the board and makes exactly the same predictions as Copenhagen.
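
            For what it’s worth, here is a toy numpy check of that “same math, same predictions” claim for a single qubit; the 0.3/0.7 amplitudes are arbitrary, and the “apparatus” is just a second qubit standing in for whatever does the measuring:

```python
# Toy check of "same math, same predictions" for one qubit. The 0.3/0.7 split
# is arbitrary; the "apparatus" is just a second qubit standing in for whatever
# does the measuring.
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)     # system state a|0> + b|1>
system = np.array([alpha, beta])

# Collapse-style bookkeeping: read the probabilities straight off the amplitudes.
p_born = np.abs(system) ** 2

# Unitary-only bookkeeping: entangle the system with an apparatus qubit (a CNOT
# copies the basis state), then trace the apparatus out and read the branch weights.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = cnot @ np.kron(system, np.array([1.0, 0.0]))   # a|00> + b|11>
rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_system = np.einsum('ikjk->ij', rho)                # partial trace over the apparatus
p_branches = np.real(np.diag(rho_system))

print(p_born)      # [0.3 0.7]
print(p_branches)  # [0.3 0.7] -- identical numbers, different story about what they mean
```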

    • smack says:

      I’m a mathematician with a (non-professional but significant) interest in quantum mechanics. Nothing about your interpretation of these events seems improbable to me. There are of course plenty of good arguments for the MWI. There are also some very strong arguments against it*, and the fervor with which it is held in some quarters does not seem to me to be commensurate with a reasonable evaluation of the evidence for it.

      *I’m on the fence about whether to flesh that out. On the one hand, it’s uncharitable not to perhaps? On the other hand, it has every chance of starting a long conversation that is tangential to the one I am responding to, and to Scott’s post. I am inclined then to let people use Google, but to flag here that it is true that nobody should update their opinion about MWI based on my unsupported assertion that there are strong arguments against it.

    • sty_silver says:

      Your intuition of weirdness could actually be taken as evidence for many worlds, because your brain evolved to throw spears and dodge wolves and such, not to understand the fundamentals of the universe. Wouldn’t it be a strange coincidence if your intuition aligned anyway?

      In any case, as others have said, weirdness is not the relevant metric. Complexity is the relevant metric, and MW is simpler than any other theory.

    • ajakaja says:

      When you study enough physics, MWI starts to look much simpler than the alternatives. The two dominant theories are, roughly:

      Copenhagen: there are multiple states of existence in systems simultaneously UNTIL YOU LOOK AT THEM
      MWI: there are multiple states of existence in systems simultaneously

      Copenhagen gives up on explaining how we end up seeing one result or another, and says “well, there’s this thing where when you measure the result, it instantly collapses into a concrete state”. MWI says: “that’s silly, clearly all the states continue to exist, but once you have measured one you’re entangled with it so you can’t see the others anymore”.

      • Bugmaster says:

        I could be wrong, but doesn’t Copenhagen actually say, roughly, “there are multiple states of existence in systems simultaneously UNTIL YOU HIT IT WITH SOMETHING” ? There’s nothing special about “you looking” at things, it’s the external interaction (imparting or removing energy from the system) that’s the problem.

        • ajakaja says:

          Yeah, I was kinda making fun of it there. It is until you ‘measure’ it, and what measuring means is intentionally vague but it must mean “until you become entangled with it”. The difference is between what the models say happens to the alternatives after you become entangled with them. Copenhagen says they don’t exist, MWI says they do, for some definition of the word ‘exist’.

          I prefer not to think of us, or any machine we build, as having any special significance in the universe, so it seems obvious that MWI is simpler because our interactions with systems should be governed by the same rules that their internal actions follow.

          • Bugmaster says:

            You had me until this part, then I got lost again:

            I prefer not to think of us, or any machine we build, as having any special significance in the universe

            As I said, I’m not a physicist, but does the Copenhagen interpretation really privilege us in any way ? For example, if I leave the Schroedinger’s Box alone for a while, and eventually a meteorite falls on it and knocks off the lid, wouldn’t that collapse the wavefunction in the exact same way as if I intentionally opened it ?

          • ajakaja says:

            What the Copenhagen interpretation does is it punts on the problem of figuring out what measurement is. The problem is that you have to draw a line _somewhere_. It could be ‘a human observer’ (hence the people who draw connections between consciousness and QM) or could be “a sufficiently external system”, such as a laboratory or, yes, a meteor, or something with a large enough (?) energy difference (someone else mentioned Objective Collapse theories in these comments).

            MWI figures this isn’t a real problem. Figuring out the source of the probabilities we observe is a real problem either way (the Born rule), but figuring out what constitutes ‘measurement’ is unnecessary; we just end up entangled like anything else.

      • strange9 says:

        I mean, I recently got a B.A. in physics, and I’m not at all sure MWI looks simpler or more elegant. If you want to keep locality (and I think there are very good reasons to try and keep locality), you have to do very weird things in terms of ‘changes propagate out at some speed, and intermingle somehow with one another as they do.’

        I don’t know if that utter mess is better or worse than ‘in a superposition till [weird undefined thing called measurement happens], then it behaves like most things do,’ but it’s definitely not obvious.

        • ajakaja says:

          I don’t think MWI includes a version of “changes propagating at some speed”. What are you referring to?

          • strange9 says:

            Ah, sorry: MWI says that when you seem to collapse a wavefunction, you’ve really just ended up in a different universe, right? That each possible result of the experiment shows up in a different world. We either say that a) this goes for everyone, everywhere, at once (violating locality), or b) we commit ourselves (as some MWI proponents do) to something like: “Worlds split as the macrostate description divides inside the light cone of the triggering event. Thus the splitting is a local process, transmitted causally at light or sub-light speeds.” This is intensely and deeply weird (and not in a ‘oh wow, physics sure is wacky!’ sense, but in a ‘wow, this theory requires a lot of extra stuff, including a relativistic wavefunction that we wouldn’t begin to know how to create’ kind of sense).

      • eyeballfrog says:

        That still doesn’t explain why I’m entangled with this particular state of it.

        • ajakaja says:

          MWI doesn’t explain the probabilities we observe (the Born rule). There is still fundamental randomness in the theory.

          Ultimately, imo, the Born rule is the only part of fundamental quantum mechanics (ie not QFT) that is really unexplained still. (although I haven’t looked in a while!)

          • eyeballfrog says:

            But if you have to postulate the Born rule, you’re no better off than postulating wave function collapse. Both are adding an additional postulate to the theory to explain the same thing: why do we never actually see superpositions as superpositions?

            Of course, one can solve both problems at once with Bohmian mechanics (and have them replaced with exciting new problems).

          • ajakaja says:

            No, the Copenhagen interpretation postulates both the Born rule and wavefunction collapse. MWI only postulates Born.

      • migo says:

        I tend to think Copenhagen is way simpler. My impression is the following (is it correct?)

        Copenhagen: things do not exist in the traditional sense. Things ‘pop up’ into existence when observed. The probability of the possible values of things at the moment of observation is given by the Born rule. The evolution of probabilities is given by the wave function. The wave function gets a Bayesian update at the moment of observation.

        MWI: there are multiple worlds where things assume different values. The evolution of these multiple worlds along time is given by the wave function. But things get unclear now: how do we connect this with the Born rule giving probabilities of outcomes? The proportion of worlds where a thing has a certain value is given by the Born rule? And/or the Born rule gives the probability that an observer ends up in a world where the thing assumes a certain value?

    • viVI_IViv says:

      I’m not sure if it’s cult following, but I think many people here only read about quantum mechanics and its interpretations from Yudkowsky’s sequences, where, as he often does, he overstated his case and ignored or misrepresented alternative positions.

      You see people here repeating his incorrect characterization of the Copenhagen interpretation (which is not, in fact, an objective collapse interpretation) or his ravings about Kolmogorov complexity (which doesn’t actually distinguish between theories that make the same predictions).

      • Viliam says:

        Here is how Wikipedia explains Copenhagen interpretation:

        the interaction of an observer or apparatus that is external to the quantum system is the cause of wave function collapse,

        and the objective collapse interpretation:

        collapse occurs either randomly (“spontaneous localization”) or when some physical threshold is reached, with observers having no special role.

        It seems to me they have more similarities than differences. Both assume that there is a thing called “collapse”. The former assumes it happens when someone with a PhD looks at the particle, and the latter assumes it happens regularly every five seconds.

        The thing Yudkowsky argues against is the collapse itself, not its timing.

        (Although the fact that no proponent of the collapse can say when exactly it happens and why, is also weak evidence against its existence.)

        • viVI_IViv says:

          My understanding is that the Copenhagen interpretation argues that the job of a physical theory is just to explain observations; hence the notion of something existing that can’t have any effect on our observations is not a scientific one.

          So it’s not like a human watching a physical system does anything magical to it that causes collapse of an independently existing wavefunction, it’s that we can never step outside our human point of view, hence any time we interact with something there will be a measurement event somewhere that realizes one of the multiple possibilities.

      • koreindian says:

        “…his ravings about Kolmogorov complexity (which doesn’t actually distinguish between theories that make the same predictions).”

        Could you expand on this? Even if two theories A & B make the same predictions, if the shortest program that can output a description of theory A is smaller than the shortest program that can output a description of theory B, then A has a lower Kolmogorov complexity than B.

        • viVI_IViv says:

          If two theories describe the same function, then the shortest program that describes each of them is the same.

          • koreindian says:

            Same predictions != same function.

            Two theories can make the same predictions, but one could take more/different inputs, or postulate more objects, than the other.

          • viVI_IViv says:

            But in the end the algorithmic information theory framework consists of predicting the next bit of a string of observations from its observed prefix.

            It’s a black-box framework: if two theories estimate the same probabilities for the next bit given the same prefix, then algorithmic information does not distinguish them.

            There might still be some intuitive sense in which one theory is “simpler” or “more elegant” than the other, but algorithmic information can’t formalize it.

          • koreindian says:

            You’re playing fast and loose with definitions.

            When we are talking about the Kolmogorov complexity of a theory we are talking about measuring the minimum length of programs that can produce a complete description of the theory. We are not talking about the minimum length of programs that can output the same predictions as the theory.

          • viVI_IViv says:

            When we are talking about the Kolmogorov complexity of a theory we are talking about measuring the minimum length of programs that can produce a complete description of the theory. We are not talking about the minimum length of programs that can output the same predictions as the theory.

            Then you are using it in an essentially useless way. I mean, you could write each theory in English and then, for a given choice of a universal monotone Turing machine, each of these English strings would have a well-defined (albeit uncomputable) Kolmogorov complexity, but this complexity would not have any obvious relation to their predictive power, and it would very strongly depend on the choice of the Turing machine.

            Kolmogorov complexity of predictions is used in algorithmic information theory because if you weight every possible continuation of the observed prefix by 2^(-length), summed over all the halting programs that compute it, you obtain the Solomonoff distribution, which can learn any computable distribution asymptotically faster than any computable estimator. The Solomonoff distribution M(x) is connected to Kolmogorov complexity K(x) because it is dominated by the probability of the shortest program: -log M(x) = K(x) + O(log(length(x))). It’s all explained here.

          • koreindian says:

            …Right, and when you are doing Solomonoff induction, you keep all the hypotheses that are consistent with the data so far, but you give more weight to those hypotheses that have greater algorithmic simplicity. And you characterize the algorithmic simplicity of those hypotheses by their Kolmogorov complexity. That is distinct from asking what the Kolmogorov complexity of all the relevant observables is.

            Yes, the choice of UTM matters, but the UTM is fixed across all hypotheses when we are doing Solomonoff induction.

            Again, by assumption, all the hypotheses still under consideration have made the same predictions on all available observables. It is silly to say that algorithmic information “cannot make any distinctions” between two theories that give the same predictions given the same observables. That is precisely what Solomonoff induction does.

            Say we have the data “01010101…01”. One program consistent with this data is the one that computes and returns a binary string with 0s in even places and 1s in odd places. Another program consistent with this data is the one that computes a binary string with 0s in even places and 1s in odd places, but has 10 million lines of useless (pseudorandomly generated) operations before the computed string is returned. Both are consistent with the data, both always make the same predictions, but they are not both equally weighted during Solomonoff induction.
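
            A toy version of that weighting, with description lengths assigned by hand (real Kolmogorov complexity is uncomputable, so these bit counts are purely illustrative):

```python
# Toy version of the weighting described above: among hypotheses that reproduce
# the observed string, shorter descriptions get exponentially more weight. The
# bit counts are assigned by hand (real Kolmogorov complexity is uncomputable),
# and the padded program is simply charged for its dead code.

observed = "01" * 8                                   # "0101...01"

hypotheses = {                                        # name: (description length in bits, predictor)
    "alternate 0/1":             (24,         lambda n: str(n % 2)),
    "alternate 0/1 + dead code": (10_000_000, lambda n: str(n % 2)),
    "all zeros":                 (8,          lambda n: "0"),
}

def consistent(predict):
    return all(predict(i) == bit for i, bit in enumerate(observed))

for name, (bits, predict) in hypotheses.items():
    if consistent(predict):
        print(f"{name}: consistent, prior weight 2^-{bits}")
    else:
        print(f"{name}: ruled out by the data")
# "all zeros" is eliminated outright; the padded program survives but carries
# about 2^-10,000,000 of the short program's weight, which is what
# "equally consistent, not equally weighted" means here.
```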

        • Jaskologist says:

          It is also uncomputable, which is a bit of a problem when you’re claiming it as the ideal metric.

          “In particular, for almost all objects, it is not possible to compute even a lower bound for its Kolmogorov complexity (Chaitin 1964), let alone its exact value.”

          • koreindian says:

            Thankfully we don’t need to make the perfect the enemy of the good, and can use some of the many efficiently computable approximations of Kolmogorov complexity instead.

      • dionisos says:

        or his ravings about Kolmogorov complexity (which doesn’t actually distinguish between theories that make the same predictions).

        MWI and Copenhagen will explain the same past data (which is the only data we could have), but I think they predict something very different in the future: all the superpositions will exist and you don’t have to select one. So the Kolmogorov complexity is indeed reduced in MWI.

    • notpeerreviewed says:

      The post itself doesn’t actually claim that Many-Worlds is the correct interpretation; it claims that the Aeon article is using the wrong criteria to distinguish science from pseudoscience. Given that you’re using Occam’s Razor rather than falsifiability to argue against Many-Worlds, it seems clear that you agree with the central argument of the post.

      Edit: I guess there’s one section that sort of does claim MWI is 90% likely. But overall the post is more about the correct way to reason about theories than it is about MWI specifically.

  17. eigenmoon says:

    The “science” that endlessly pisses me off with its non-empirical models is economics. On one side we have regulation enthusiasts with their notorious predictive power (“The Federal Reserve is not currently forecasting a recession.” (c) Bernanke 2008) and foolproof plans for economic growth (“Alan Greenspan needs to create a housing bubble to replace the Nasdaq bubble” (c) Krugman 2002). On the other side we have deregulation enthusiasts who seem to actually get the job done, but they don’t seem to have as many mathematical models as the regulation fans, because most models are too crude and simply don’t work.

    So I believe that economists basically all know that regulation is bullshit but pretend otherwise because the elites want them to. But then comes Peter Turchin (yes, the one with historical cycles) and writes that all economists know that regulation is super great but pretend otherwise because the elites want them to:

    But why is there such an enormous gulf between what economists know and what they say in public? One possible explanation is that policies, such as free trade, while often harming broad swaths of populations, tend to benefit narrow segments of economic elites. Perhaps the critics from the left (and a few “heterodox economists”) are right when they charge that economists speak what the powers-that-be want us to hear.

    And he has a whole book of mathematical models to back this up.

    So you have two sides each claiming that everybody knows they’re correct but say otherwise to please “the powers-that-be”. And I think that if that’s science, then screw it. Science (unlike rationality!) is fundamentally a social project and as such it’s totally owned by Moloch. Let’s do more rationality and less science.

    • Hoopdawg says:

      “Alan Greenspan needs to create a housing bubble to replace the Nasdaq bubble” (c) Krugman 2002

      This meme has a soft spot in my heart and a permanent place in my memory, since at a certain point of time it led me to several important insights and discoveries. First – that many claims about simple facts are just bullshit(ting). Second – that it’s really simple to just independently verify that, and I probably should be doing it more often. Third – that people never really do that, and essentially get all their information from third-parties in self-established bubbles. Fourth – that they refuse to be corrected even after being presented obvious evidence.

      But I digress, the point is that in 2002 Krugman wrote an opinion piece (“Dubya’s Double Dip”, available online on the NYT website, no point in linking it directly because paywall, but googling it and google translating from burmese to azeri did the trick for me) deriding Alan Greenspan’s work as Federal Reserve Chair. It was about how, to pull a representative quote from it, “Mr. Greenspan needs [a recovery] to avoid awkward questions about his own role in creating the stock market bubble.” Mr. Greenspan “thinks he can pull that off”, but will it work? Nuh-uh. To take one sentence of it and somehow misinterpret it as “Please, Mr. Greenspan, do a housing bubble now, only you can save us.” is a feat of extreme dishonesty and chutzpah. But someone did just that. And others repeated the claim. And repeated it again. And again.

      (Not surprisingly, I am not the only person to notice this, so when I searched for the piece to re-read and cite it properly, the first result I got was a person making the same point more thoroughly, including pointing out that Krugman wrote a column about the danger of a housing bubble in the US a mere two weeks later. “Mind the Gap”, also in NYT, also available on its website.)

      I’ve already had this conversation several times, and so far nobody simply admitted he was misinformed. Not to be pushy, but I have a high opinion of this place’s intellectual honesty, so I’m hoping for a first here.

      • eigenmoon says:

        I’m doubling down on this.

        We have Krugman’s own defence:

        It wasn’t a piece of policy advocacy, it was just economic analysis. What I said was that the only way the Fed could get traction would be if it could inflate a housing bubble. And that’s just what happened.

        The post linked there by Krugman as “gracious, sensible explication” says:

        Krugman was mainly expressing pessimism. He was not cheerfully advocating a housing bubble, but instead he was glumly saying that the only way he could see to get out of the recession would be for such a bubble to occur.

        This is the most Krugman-friendly interpretation you could possibly defend. I’m not going to argue whether the article in question was cheerful or glum. A much more interesting question is whether pointing out that something is the only way out constitutes saying that we should do exactly that. He said that he stopped short of saying what we should do. But look, in another place he talks in terms of “should”:

        During phases of weak growth there are always those who say that lower interest rates will not help. They overlook the fact that low interest rates act through several channels. For instance, more housing is built, which expands the building sector. You must ask the opposite question: why in the world shouldn’t you lower interest rates?

        Also:

        Meanwhile, economic policy should encourage other spending to offset the temporary slump in business investment. Low interest rates, which promote spending on housing and other durable goods, are the main answer.

        Finally, I have a reddit post that links to a video; with my setup, the video doesn’t play, but here’s what the redditor heard from Krugman in the video:

        To be honest, a new bubble now would help us out a lot even if we paid for it later. This is a really good time for a bubble… There was a headline in a satirical newspaper in the US last summer that said: “The nation demands a new bubble to invest in” And that’s pretty much right.

        This is from 2009, so he talks about a different recession, but it’s the same year he posted the defense (quoted above) that he never advocated for a bubble in 2002.

        Add-on: I’ve read “Mind the Gap” and disagree with your interpretation. Krugman says there:

        And yet the Fed chose not to cut rates on Tuesday. Why?

        He admits that there might be a bubble and wants the Fed to inflate it even further. This is not an article about the danger of the bubble. This is an article about the danger of deflation that might happen if the bubble bursts while it’s small.

        • Hoopdawg says:

          It wasn’t a piece of policy advocacy, it was just economic analysis. What I said was that the only way the Fed could get traction would be if it could inflate a housing bubble. And that’s just what happened.

          This is indeed an accurate summation of the article in question (and of subsequent events). If it’s “the most Krugman-friendly interpretation you could possibly defend”, it’s by virtue of being the only defensible interpretation, period. It’s not exactly accurate to describe what you linked as a defense either, as much as Krugman just off-handedly stating that, for the record, this is what actually happened; “guys, read it again”.

          He admits that there might be a bubble and wants the Fed to inflate it even further.

          You are essentially assuming that the forming of the bubble is the only conceivable result of lowering interest rates. I am pretty certain that Krugman does not share this assumption, and the article makes it sufficiently clear that he considers the state of the real estate market to be entirely separate from monetary policy.

          One can, in fact, advocate lowering interest rates to kickstart investment and spending, and at the same time not advocate directing the funds into a domain one realizes to be driven by speculation rather than healthy demand (as evidenced by, you know, using the term “bubble” alone). And one can certainly advocate investing in housing when there’s healthy demand for it (e.g. “at the verge of the launch of the Euro”, i.e. in Europe circa 1992), and not advocate it during a bubble. Essentially your entire reasoning hinges on pretending otherwise.

          • eigenmoon says:

            Huh.

            Could you please summarize what the difference is between my quote that made you jump at me (“Alan Greenspan needs to create a housing bubble to replace the Nasdaq bubble”) and Krugman’s explanation that you see as the only defensible explanation, period (“the only way the Fed could get traction would be if it could inflate a housing bubble”)? I really don’t see what the fuss is all about.

            he considers the state of real estate market to be entirely separate from monetary policy.
            You’ve lost me there. That contradicts every Krugman quote that I’ve brought.

          • Hoopdawg says:

            Could you please summarize what’s the difference

            I don’t think that me calling one “an accurate summation” and “the only defensible interpretation” of the other implies I believe there’s a difference.

            The fuss is about you ignoring the implicit, but obvious from context, “while keeping the interest rates high” caveat, and appending the not-present-in-the-original “and therefore that’s what he/it should be doing” instead, thus completely bending the meaning of either of those sentences.

            That contradicts every Krugman quote that I’ve brought.

            No it doesn’t. I realize you think it does, and already wrote an entire paragraph (the last one in my previous post) explaining why I think you’re wrong about this.

          • eigenmoon says:

            @Hoopdawg

            You’re not making any sense whatsoever to me.

            implicit, but obvious from context, “while keeping the interest rates high” caveat
            What?! That contradicts 3 of 4 Krugman quotes above.

            and appending the not-present-in-the-original “and therefore that’s what he/it should be doing” instead
            Seriously? The difference between “the only way out is to do X” and “the only way out is to do X, so that’s what should be done” is for you “completely bending the meaning” and also earlier “a feat of extreme dishonesty and hutzpah”?

            already wrote an entire paragraph
            You’ve written that Krugman doesn’t advise people to invest in housing. OK, but that has nothing to do with “he considers the state of real estate market to be entirely separate from monetary policy”, which does contradict 3 of 4 quotes above.

    • ajfirecracker says:

      There is real science to be found in economics, but it mostly doesn’t make the news, and the macroeconomic analysis you’re talking about is generally the least rigorous part of the discipline (in my view)

      On the topic of recessions and bubbles in particular, I find the Austrian Business Cycle Theory to be convincing, but I am aware that it is generally not super formalized and is regarded as heterodox

  18. Ben Wōden says:

    The article currently says this:

    Nor do I think it seems right to say “The discovery that all of our unexplained variables perfectly match the parameters of a sphere is good, but the hypothesis that there really is a sphere is outside the bounds of Science.”

    This seems to me to make no sense, and I think there should be an “is bad” at the end of it. That would make the sentence make sense and tie in with what seems to be being argued. Is this an error or have I misunderstood the whole point?

    • g says:

      You have misunderstood. Let me break down the structure a bit more explicitly.

      (The discovery that (all of our unexplained variables perfectly match the parameters of a sphere)) is good

      but

      (the hypothesis that (there really is a sphere)) is outside the bounds of science.

      You don’t need an “is bad” to parallel the “is good”; our hypothetical sphere-skeptic is saying “well done, you found a nice pattern, but if you go beyond that and say that there really is a Sphere then you’re no longer doing science”.

  19. slate blar says:

    How do these beliefs affect the probability weights you place on the material future of the world?

    tl;dr: Satan-palaeontology is slightly malign; it adds a bit of low-probability event noise. QM metaphysics is benign. The material future is the same regardless of which QM interpretation is ‘true’.

    Ignore true or false, think just about predictive usefulness.

    If you believe in Satan-palaeontology then you have the same weights as believers in regular palaeontology (he’s trying to deceive us and so far has behaved like the theory, so he’ll probably keep doing that) + a tiny extra weight on weird paradigm-shifting discoveries (he might put a human fossil in 2-billion-year-old dirt just to fuck with people) + a tiny extra weight on a future in which Satan reveals himself and does all sorts of weird things. You can’t predict when Satan will stop hiding behind the physical – or I guess Satan-physical – laws of the universe, so you might as well put a high probability on those laws just working in the future.
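
    One way to write that weighting down (a sketch of mine, with made-up placeholder weights ε1 and ε2, nothing the comment itself commits to) is as a mixture:

    P_satan(future) = (1 − ε1 − ε2) · P_palaeo(future) + ε1 · P_paradigm-shift(future) + ε2 · P_satan-reveals(future), with ε1 and ε2 tiny.

    The first term is why the Satan-palaeontologist and the regular palaeontologist mostly bet the same way; the two ε terms are the extra “low probability event noise”.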

    If you believe in some interpretation of quantum mechanics over another… If they don’t differ in any empirically observable predictions then your probability weights for material futures shouldn’t change between them – I guess, aside from the trivial human-social aspect: the professors who win the argument have more money, people will keep debating this etc.

    If you like minimizing micro-probability events from your future predictions you don’t like Satan-palaeontology: it adds a bit more noise to your plausible future tree. But also you just don’t care about QM metaphysics. It makes no physical predictions, so it doesn’t matter.

    Slightly different angle: useful theories let you lower the number of probable futures. Predicting the future is actually excluding the impossible and looking at what’s left.

    Knowing nothing about the next second, all possibly imaginable futures are equally probable. The computer screen might melt. One pixel might break. The air might turn to pudding. The possibilities are so hyper-infinite that we ignore all of them. Knowing nothing about the future means absolutely any imaginable future is equally possible, so we can’t really think about the future at all.

    When humans “predict” the future, we’re actually filtering out low-probability (a.k.a. impossible) futures. A scientific theory is very good at this. A hydrogen atom weighs x. A glass of water weighs y. We reduce an infinite range of possibilities to a fixed number (when rounding to a few decimals, let’s say).

    Believing in supernatural agents can, in theory, be useful for this. Things will happen in accordance with the narrative logic of their holy texts. You expect the good to be rewarded and the bad to be punished. But then that doesn’t happen. So the predictive usefulness is lowered and you abandon it. Especially since science seems to be so good at predicting correctly, though it restricts itself to certain types of predictions.

  20. JPNunez says:

    The priors for Satan and Atlantis should be pretty low, but somehow the priors for multiverses are not that low? That seems off.

    Imma gonna wait until quantum gravity is squared out before starting to believe in evil counterparts with goatees.

    • sty_silver says:

      Why? Many worlds is a simple theory. Satan is a very complicated theory.

    • Viliam says:

      We already have evidence that quantum physics creates “local multiverses” on a tiny scale. (Or even large, if we move the entangled particles away from each other.) The question remains whether the multiverses “collapse” when they grow larger than a few particles, or keep existing.

      In other words, whose priors for multiverse are we talking about here? A person who never heard about the double-slit experiment? Or someone who already accepts that “something weird” is happening, only doesn’t know what exactly?

      • JPNunez says:

        Well yeah, when the double slit experiment was first brought up, priors for multiverses must have been low. Considering alternatives exist to it, going immediately for multiverses with only the evidence of quantum experiments for it is questionable.

        Imagine somehow Young Earth Creationists slowly and silently took over paleontology; one day they unearth a skeleton, call it “Homo Satanicus” and go “this fossil can only be explained by the presence of Satan”.

        Then you’d be replying “whose priors for Satan are we talking about here? the ones of someone who never heard of Homo Satanicus?”.

        Paleontology’s results are backed by results from other branches of science: geology, chemistry, biology, etc. So Homo Satanicus would be looked at critically by other, non-Young-Earth paleontologists, but with multiverses not even all physicists are convinced, so yes, I am going to stay skeptical a little longer.

  21. Brassfjord says:

    “SCIENCE!” also gets a lot of credit that should go to “TECHNOLOGY!”.

  22. theifin says:

    I think it is useful to consider questions of “overfitting” when assessing theories in terms of their elegance. Suppose I tell you that the masses of the electron, muon, and tau are related to parameters of a sphere. You go “wow, how elegant: that must mean something!” Then I tell you that I found this relationship by writing a program that takes three input numbers and searches for any expression that can relate these numbers, using any value from the big list of mathematical constants, and any of a range of mathematical operations. You ask for a copy of this program and find that, no matter what random set of three input numbers you give it, the program returns a simple relationship between those numbers. You conclude that the “parameters of a sphere” relationship between particle masses is not especially meaningful, but is most likely a consequence of this extensive search for a “best fit”.
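
    (To make that concrete, here is a minimal sketch of such a search program. It is my own illustration rather than anything beyond what the paragraph above describes; the constants and operations are placeholders, and a real version would use a much larger search space.)

    import itertools
    import math
    import random

    # Toy "relation hunter": given three numbers, search a pile of constants and
    # simple operations for any near-exact relation of the form z ≈ c * (x op y).
    # As the pool of constants and operations grows, almost any triple ends up
    # admitting some "elegant"-looking fit, which is the overfitting worry above.
    CONSTS = {"pi": math.pi, "e": math.e, "sqrt2": math.sqrt(2), "2": 2.0, "3": 3.0}
    OPS = {"*": lambda a, b: a * b, "/": lambda a, b: a / b, "^": lambda a, b: a ** b}

    def find_relations(x, y, z, tol=1e-3):
        hits = []
        for (cname, c), (opname, op) in itertools.product(CONSTS.items(), OPS.items()):
            try:
                value = c * op(x, y)
            except (OverflowError, ZeroDivisionError):
                continue
            if abs(value - z) <= tol * abs(z):
                hits.append(f"z ≈ {cname} * (x {opname} y)")
        return hits

    x, y, z = (random.uniform(1, 10) for _ in range(3))
    print(find_relations(x, y, z))   # often empty with this tiny pool; rarely empty with a big one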

    More generally, the model of how science works (and how scientific theories are tested) that Scott uses in this post seems to give too much weight to retrodiction (“fitting” results that we already know or already assume to be true). A worthwhile scientific theory is one which makes surprising predictions (one which predicts things that are contrary to expectations at the time the theory was created): if those surprising predictions are confirmed, that gives support to the theory as a useful construct (it has demonstrated its value by correctly predicting something that we wouldn’t have predicted otherwise). Every viable candidate theory is going to be consistent with currently known results (because the people who propose those theories already know about those results, and so “fit” their theories to those results). We can only distinguish between currently viable theories by testing their surprising predictions. A theory that doesn’t make any surprising predictions about things we don’t yet know is “not even wrong”: it doesn’t improve our predictive ability in any way.

  23. liskantope says:

    Quick corrections on the math involving circles and spheres: if we let x be the radius (not diameter), then the circumference of a circle of radius x is 2πx, and the volume of a sphere of radius x is 4/3 πx^3 (not 4/3 πx^2).

    • Don P. says:

      Also — and this is probably both missing the point and wrong, but… — if one of the masses is x, I don’t think you can have an x^2 (or higher!) value as another one of the masses, for dimensional analysis reasons. (Among other things, if you choose your units “right”, x^2 can be smaller than x.)

      • liskantope says:

        That seems like a valid point. I think you could have something times x^2 as the measure of another quantity related to the particle whose mass is x, and that this could possibly be used as a point in favor of certain untestable models in the spirit of Scott’s post. That’s basically exactly what happens with gravitational force between two objects being inversely proportional to the square of the distance (call it r) between the objects. To the best of my understanding (which for this kind of physics is extremely naive at best), this is used to help justify the idea that gravitational force happens in the form of particles called gravitons which emanate from objects in all directions: the r^2 factor would then come from the surface area of a sphere of radius r being 4πr^2.

      • StellaAthena says:

        In physics, constants can have units. For example, the gravitational constant G in the expression GMm / r^2 has the rather crazy units of N m^2/kg^2. It’s difficult to use this to get around Scott’s specific example, because the geometric formulae are derived from each other as follows:

        Assume 2πr is the circumference of the circle. The area of a circle can be split into thin concentric rings of inner radius r_1 and outer radius r_2. Taking the limit as the thickness goes to zero, the sum of the circumferences of the infinitesimally thin rings gives the area of the circle, so we compute the integral of 2πr dr = πr^2 to get the area. A similar trick has us integrating cross-sectional area with respect to position to get volume: slicing the sphere into thin disks of area π(r^2 − x^2) at height x, the integral of π(r^2 − x^2) dx from 0 to r is 2/3 πr^3, and we need to double this because the integral only covers half the sphere, giving 4/3 πr^3. The exponents and the doubling factor are dimensionless, so we end up with units of m^2 for the area and m^3 for the volume. If you’ve taken integral calculus you’ve seen this idea before, probably referred to as a “volume of a solid of revolution” problem.
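
        (If anyone wants to double-check those two integrals rather than trust the algebra, here is a quick sympy sketch of my own, assuming sympy is available:)

        from sympy import symbols, integrate, pi

        # Quick check of the two integrals above (a sketch, not part of the original
        # comment): rings for the circle's area, disks (doubled) for the sphere's volume.
        R, r, x = symbols("R r x", positive=True)

        area = integrate(2*pi*r, (r, 0, R))                # -> pi*R**2
        volume = 2*integrate(pi*(R**2 - x**2), (x, 0, R))  # -> 4*pi*R**3/3

        print(area, volume)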

        We can get the 2πr formula to come out in units of kg by assigning the 2π bit the units of kg/m, but then the other particles have masses with units of kg·m and kg·m^2 respectively. Note that the exponent reflects the number of factors of r, and so giving it units seems extremely dubious. If there were a way to justify leaving it unitless, then assigning it the dimensions 1/m would (together with our units for 2π) give all three particles the correct units.

        But yes, you are missing the point 😛

  24. I would say that empiricism is larger than individual tests and also includes our understanding of the entire world. The proof for human beings is stronger than the proof for Satan or aliens, so the case for human beings having built something in the past is stronger than that for Satan or aliens. The things we have far more proof are true are first in line to be treated as definite (or, if we want to be really particular, we could say that once we’ve ex nihilo assumed that our eyes are telling us the truth most of the time, then the rest follows most of the time).

    Satan and aliens have not passed empirical tests of their existence more generally, before we even get to speculating on what they might have done specifically. Dinosaurs are extremely novel creatures but they are a lot closer to things we already understand than things we have scant proof for. If a creature existed in the past, speculation about it having properties more similar to existing animals than to Gods carries more weight. The elegance criterion that says God adds an unnecessary extra “weird thing” depends on an understanding of what a “weird thing” is, and “weird things” are those you don’t expect to encounter. The elegance criterion converges with the empirical one.

    There could be plenty of weird things we have never encountered before, mind you, but their weirdness is constrained by needing to have behaviors that affect things we already understand, otherwise they become non-explanations, and things that can’t theoretically be explained don’t exist in a very real sense. Explanations at the end of the day are only the breaking down of something new into chunks of what we already understand, so an explanation is a chain of relations between the categories of things that goes all the way back to when you first opened your eyes as a child. If something can’t be explained, we can’t keep asking “why” to get back to the level of things already known, and since things already known include everything physical with three dimensions with properties that affect other three dimensional objects, this effectively means that something which can’t be explained either isn’t real or can effectively be treated as unreal because it has no physical properties.

    Even conventional quantum mechanics, as wacky as it is portrayed, shows us a world where a photon can be a particle or a wave, two things we understand in our everyday world. Quantum mechanics is only spooky in the way that these categories morph. Meanwhile, it’s not clear what “another universe” or a “multiverse” even is. What does this mean? If it refers to some extra 3D space then it is already easily understood as part of our own universe. If it’s indeed something wrapped up in tiny dimensions, all Tardis-like, then we are going to start to have problems, because dimensions exist as a coordinate system, and yet no objects can be measured to have anything less than three dimensions, and if something can’t be measured it is a point particle.
    Many things in science are effectively analogized to extra dimensions without that describing some real space somewhere in which physical objects move in 4D coordinate space. In robotics, higher-dimensional configuration spaces are used to plan the movement of multiple joints. In materials science, certain materials are described as two-dimensional because they consist of a single layer of atoms. It becomes useful to treat things as if they have more or fewer than three dimensions in certain cases where the objects involved very much have three dimensions, so it’s not surprising that the concept takes on a life of its own.

    In order for it to mean something it has to fit into the chain of relations that links it to mundane things. If it’s outside of that chain of relations, then it becomes something that can only be gestured at vaguely and barely described at all in language. If the math makes the relations clear already, then gesturing at multiverses in no way makes things clearer. No new insight is gained, because insight is when a “weird thing” is chained to an ordinary thing. Mathematically, it might be elegant to relate some properties of particles to an nth dimensional hypercube, but unless you can pluck me a hypercube from the hypercube tree, you should probably stop acting in ways that encourage pop science journalists to write articles about escaping to other universes.

  25. kaathewise says:

    Maybe it is slightly off-topic, but I’ve been trying to wrap my head around the multiverse idea for years, and I still cannot make a coherent model out of it. The main issue that I cannot crack open is how to reconcile the postulated realness of all the universes with probability.

    To give an example, let’s say we have a particular quantum experiment with 2 equiprobable outcomes, 0 and 1, that we are repeating N times (e.g. Schrodinger’s Cat). It yields a branching tree of universes with 2^N “resulting” universes that would have 2^N different histories of this experiment, from 00…0 to 11…1. If all these universes are equally real, we should not be able to tell anything special about our history, it can be any sequence of 0s and 1s. But since we know that the experiment should yield 0 or 1 equiprobably, our history should be quite special.

    If we take a “singleverse” view, the answer is quite simple: yes, we have many potential universes, but only one of them is selected as a special real universe, and the probabilities we talk about are simply the probabilities of this or that branch being selected.

    If, however, we postulate that all the universes are equally real, i.e., after the experiment there exist two identical universes that are indistinguishable from each other apart from the experiment result, then there is no selection going on, and when there is no event, there is no probability!

    You could try patching it up by suggesting that yes, all 2^N universes are equally real, and we could be in any one of them, which makes all 2^N histories equally probable, which agrees with our understanding of the experiment’s probability, but this has 2 problems:

    1. The phrase “we could be in any one of them” is somewhat misleading. Since all universes are real, we actually are in all of them; remember, there is no selection happening that marks one of the universes as “actual”.

    2. This would only work with 50:50 experiment outcomes. If the outcome is uneven, say 1:2, then we can’t get away with simply counting the resulting universes; we have to assign them “weights”, and then it is not clear what mechanism would take those “weights” into account, if not the good old “select one and call it real”.

    I have yet to find a good answer to this question; I would appreciate it if anyone has any ideas.

    • Robert Jones says:

      I don’t see any great difficulty in saying that after conducting the experiment N times, there are 2^N kaathewises, all of whom are equally real, but none of whom have conscious awareness of the others. From the point of view of any particular one, his universe has the special property that he’s in it, but it isn’t more real than the others.

      It’s worth noting that you take the entire rest of the universe along with you: if there is a civilisation of intelligent beings in another galaxy, there are now 2^N of each of those beings, even though all 2^N copies are identical for their purposes (since they have no way of knowing the outcomes of your experiments).

      If the probabilities are 1:2, I don’t think that causes any great difficulty either. We just have 3^N universes, where at each branch there are two universes for one outcome and one for the other, and all universes are equiprobable.
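
      (A quick numerical restatement of that construction, my sketch rather than anything from the comment above:)

      import itertools
      from collections import Counter

      # Toy check of the "duplicate the branches" trick: at each of N branchings,
      # outcome A gets one branch and outcome B gets two, and every branch is
      # treated as equally real. Counting branches then reproduces the 1:2 odds.
      N = 8
      total = 3 ** N
      counts = Counter(h.count("B") for h in itertools.product("ABB", repeat=N))
      mean_B = sum(k * v for k, v in counts.items()) / total
      print(mean_B / N)   # -> 0.666..., i.e. the per-branching probability of B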

      As I say in my other comment, what seems to me a difficulty is that the outcome distribution of quantum events is typically continuous and not discrete, so we need a continuum of universes for each experiment, i.e. 𝖈^N universes. While I suppose that can’t be excluded, the metaphysical extravagance seems absurd.

      • kaathewise says:

        I appreciate your observations regarding continuous events, which do indeed make the situation even worse, but I feel that I have not got my point through very well.

        The really special thing about our universe is not that we are in it. The really special thing is that our history contains about the same number of 0s and 1s, of left turns and right turns, which can’t be said of every universe.

        In the “singleverse” model we select this universe or that one, either equiprobably or with weights, and mark this selected universe as “real”. When there is selection involved, we can talk about the properties of the universe that gets selected, and about probabilities of this, or that.

        If, however, there is no selection involved and no “marker” that is moving left or right, then there isn’t even an event you can assign a probability or frequency to.

        • eigenmoon says:

          I do not understand your point.

          Suppose that you toss a coin 4 times. The probability that all tosses will give you the same result is 1/8.

          Now suppose that you get 16 volunteers and assign them 4-bit indices from 0 (0000) to 15 (1111). Then you ask whose 4 bits are the same. The result is 2 of 16 = 1/8.

          The math in those cases is identical. In fact, the way probability theory deals with coin tosses is by reducing them to the multiple-volunteer type of situation. There is no “marker” or “selection” in probability theory. A random variable is simply defined as a function on a probability space, which is basically the set of possible universes.

          Replace 4 with a large number and you’ll get “about the same number of 0s and 1s”. This is not making our universe special. In fact, it’s to be expected from most universes.
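
          (For anyone who wants the counting spelled out, a small sketch of my own, not part of the original comment:)

          import itertools

          # Enumerate all 4-bit histories and check how many are all-0s or all-1s,
          # then check how common "about the same number of 0s and 1s" is for a
          # longer run. Near-even histories are the typical case, not a special
          # one, and their share grows toward 1 as N grows.
          histories = list(itertools.product("01", repeat=4))
          all_same = [h for h in histories if len(set(h)) == 1]
          print(len(all_same), "/", len(histories))   # 2 / 16 = 1/8

          N = 20
          near_even = sum(1 for h in itertools.product("01", repeat=N)
                          if abs(h.count("1") - N / 2) <= N * 0.1)
          print(near_even / 2 ** N)   # roughly 0.74 for N = 20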

        • Robert Jones says:

          You haven’t actually run the experiment though, so you can’t say whether our history contains about the same number of 0s and 1s. You would agree I think that there’s a 1/2^N chance that if you did run the experiment you would get a sequence of N 0s. For sufficiently large N, it’s overwhelmingly likely that we would see about the same number of 0s and 1s, but that’s just because (on this hypothesis) it’s true of the overwhelming majority of universes.

        • strange9 says:

          I’ve often wondered about this w.r.t. irrational probabilities (like 1/√2 or something). At that point the model of ‘discrete universes, a certain number of them for each outcome’ no longer seems to make sense, since the appropriate amounts can’t be expressed as a ratio at all.

    • eric23 says:

      It may (or may not) help you to read up on the different definitions of “probability” (for example this link) which philosophers have formulated.

    • smack says:

      This is indeed probably the most significant problem with the MWI. See the work of Adrian Kent, for example, who has written in quite some detail along the lines you consider. In general, the project to get Born’s rule out of the MWI is in quite a depressing state these days, in my subjective estimation.

    • ajakaja says:

      I am not sure I understand your objection here.

      The correct interpretation, per MWI, is roughly:

      1. we could be in any one of the 2^N results
      2. this continues to be true if the probabilities are not 50:50
      3. but it’s literally not true that it’s just “one of 2^n universes”, because until you are entangled with a particular outcome, the 2^n universes you are in can interfere with each other (this is the point of the double-slit experiment — in this case, run N times in a row with one particle each time).

      So it is quite literally true that you are in a superposition of the resulting states until you do something that distinguishes which one you are in (note that in practice this has to happen at the atomic scale, because of decoherence).

    • spyder says:

      A very interesting point that you are making.
      Two things I came up with:

      1. Which kinds of events “count” for splitting the universe anyway? A coin toss and its probabilities are such a high-level abstraction that I don’t see how the physical world can react to it in a fundamental way.
      If only quantum events are relevant, then we already see all the probabilities merging into a continuous outcome, i.e. no “splitting” happening, since there are no distinguishable events.

      2. I concur with your point about uneven splitting. In each of these multiverses, reality is strangely different. Observe the fringe case: the multiverse where every coin toss results in a 1. Wouldn’t the probabilities inside that particular multiverse be different? All the coin tossers there can expect is another 1 next time. So that’s a very different reality, and it also doesn’t split anymore, since there IS no probability anymore.
      But all that is nonsense, since we had the prior that there is this hyperparameter “probability of a coin toss” which sits above all the multiverses. So where did that probability come from in the first place, then?

      I find it hard to express, but somehow the abstract concept of probability is just that – an abstract concept for the description of observations and predictions. MWI seems to be applying probability in too literal a sense, and the outcome is even weirder than quantum magic.

      Does that make any sense? English is not my native language but I love the discussions here and want to join in 🙂

      • kaathewise says:

        1. I am only using discrete splitting for simplification; if we can understand what is going on in the discrete case, we can surely generalise it to the continuous space of possibilities afterwards.

        2. I understand what you mean, but I think my claim is slightly different: at the point when we start these experiments we already have a model of their nature, and we can assume for simplicity that we know the exact wave function that governs the experiment. This knowledge is above the multiverse tree that we are considering, i.e., it is not affected by the outcome of any particular experiment; on the contrary, we think that it represents a deeper truth about reality than any individual outcome.

        And now the question is: how exactly would this deeper truth sound in MWI?

        As I said, in a “singleverse” model the truth sounds like this: “in the universe that gets selected as real you will observe the results of the experiment as if it were adhering to this probability”.

        But in MWI, there is not one special or real universe, so the phrase above doesn’t quite make sense, and it’s unclear how to restate it so that it would.

        • spyder says:

          1. ok, I’ll go with that.

          2. This all sounds a lot like “usual” statistics. When trying to measure a certain property, each measurement will be different but of course within a certain distribution. But of course only one mean exists.
          In this analogy, MWI would be akin to saying there are any number of valid means and therefore none of them are special and the whole concept of “the mean” loses any meaning.

          The argument, that the underlying nature of these experiments is above the multiverse-tree doesn’t really help me here. It seems like it introduces just another level with the same problems, just like hyperparameter optimization. What am I missing?

  26. Robert Jones says:

    This may be naive, but it seems to me that this post is making “science” too large. That Caesar crossed the Rubicon is a fact, but, at least as a matter of ordinary usage, it’s not a scientific fact. You can’t conduct an experiment to verify that Caesar crossed the Rubicon, nor does it form part of some wider understanding of how the universe works.

    The Rubicon which Caesar crossed may or may not be the same as the modern river of the same name. Wikipedia tells me, “While it has not been proven, some historians believe that the two rivers are indeed one and the same,” but adds “citation needed”. I haven’t looked into it further, so I am not sure what the real state of the argument is, but there will be some fact of the matter. One can look at documentary and archaeological evidence, and judge certain pieces of evidence as more or less compelling. There is no disputing definitions, but it seems to me odd to call that investigation “science”, though we would not call it “pseudoscience” either.

    Similarly, it seems to me that the question “Who built the Great Sphinx of Giza?” is a question of history rather than science. There is some fact of the matter, but Egypt in the 26th (or 105th!) century BCE is more obscure to us than Rome in the 1st century BCE. AIUI, the evidence attributing it to Khafre is purely circumstantial. At the object level, I think 80% is far too high a probability for this hypothesis and most of the probability space should be taken up by “someone else”. We don’t know who built the Sphinx: maybe it was Khafre. Post-humans with a perfect knowledge of science could well remain ignorant on this point. Conversely it is possible that decisive evidence might be dug up next year.

    Therefore it doesn’t seem right to dismiss the Orion correlation theory as ‘pseudoscience’. If anything, it’s pseudoarchaeology.

    I don’t want to say “No hypothesis can be tested any further, so Science is useless to us here”, because then we’re forced to conclude stupid things like “Science has no opinion on whether the Sphinx was built by Khafre or Atlanteans,” whereas I think most scientists would actually have very strong opinions on that.

    It seems to me that this goes wrong in a number of ways. Firstly, while it’s right that we may have all the evidence we’re going to get on the Khafre/Khufu question, we’re obviously not in that position on the Orion correlation theory. Whether the Sphinx is 4,600 years old or 12,500 years old is the sort of thing that is definitely open to archaeological investigation. Secondly, scientists may well have strong opinions on the plausibility of the Orion correlation theory, but that doesn’t require us to say that science has an opinion. Lawyers may well have strong opinions, but that doesn’t make it a legal question.

    For that reason, I don’t think that this sheds any light on the MWI question. There are two questions there. Firstly, is the principle of parsimony a valid scientific principle? Secondly, is MWI parsimonious?

    Back in ’93, Paul Davies described MWI as “cheap on assumptions but expensive on universes”. Schrodinger’s cat is unfortunately misleading here, because the cat is in a binary state, dead or alive, so it’s easy to imagine two parallel universes corresponding to these states. But this isn’t how quantum effects usually go. In the classic double slit experiment, what is MWI saying? Is there a universe for every position where the electron can hit the detector? Does the probability distribution of the outcomes correspond to a probability distribution of universes, in which case what does that mean? Or are there actually lots of universes for each possible outcome, weighted in accordance with the probabilities?

    The supposed advantage of MWI is that it avoids giving any special status to measurement events, but in that case, the fact that we were conducting an experiment is irrelevant to the splitting, which would occur in the same way whenever an electron (or a neutrino or anything) interacted with anything, i.e. an inconceivably vast number of times in every second.

    • sharper13 says:

      Agreed. “We don’t have enough evidence to know for sure” appears to be underrated as an answer to the questions discussed.

      Given the explanation of paleontology for fossils, what’s it useful for? We can use it to find this stuff in predicted locations. Great! Is paleontology 100% true and the complete, only possible explanation? No, we don’t know that it is or isn’t, because it’s a prediction about events which happened in the past, not in the future. (It is useful as a theory, though, which is more important for our purposes.) We can increase or decrease our beliefs about the probability of paleontology being true via various pieces of information over time (someday, “Make your own fossils from the dinosaurs you created for your personal planet!” gets you almost all the way there), but in terms of “What’s true?”, many beliefs aren’t provably true or not.

      Even if you had a time viewing device (Say someone made you a device which hyperspace projects a large enough perfect telescope out however far needed to capture light radiated from past events and displays them on a screen for you) which appeared to show you the past and you could watch a dinosaur turn into a fossil over time, that would make it pretty strong on the useful scale, but it’s still not known to be 100% true, right?

      Because a “more elegant” explanation for the time viewing device you’re using might be that someone mocked up a fake using AI video tricks to create the scene in real-time as you attempt to view it.

      Unless they are in Australia, no one reading this comment knows for sure Australia currently exists as a functional nation. It’s a high probability, sure, but it’s also possible China just nuked them a few seconds ago and you just haven’t heard the news yet. It’s a low probability, sure, but do you actually “know” Australia exists as a functional nation right now? No, but it’s likely to be a useful assumption for you if you want to import kangaroo hats (no idea if that’s a thing or not).

      In short, it’s really tough to actually “know” most things are true. At best, we’re dealing with probabilities and most of the time most people overestimate the probability of their beliefs being accurate. So it’s better to judge theories like those described based on how useful they are for a purpose (like predicting X, or explaining why Y instead of Z) rather than how possible they are, because the possible explanations which fit with the evidence we have about something might literally be infinite.

  27. FormerRanger says:

    Isn’t it the case that a proposed (by actual physicists) alternative to the MWI is that all the events are already determined? The spawning of another “world” doesn’t happen. The cat is already dead and has been since the beginning of time. This requires removing the specialness of “now,” which is a bridge too far for many, but it prevents the multiverse from filling up with universes that differ only in the behavior of a single beta particle.

    • arbitraryvalue says:

      The superposition of “dead cat” and “live cat” is different, in a way that has been verified by real-world experiments, from a cat that is secretly either alive or dead. (See hidden-variable theory.) In other words, “a cat that is either dead or alive” is a real thing, not just a representation of our uncertainty.

  28. Nicholas says:

    What is the purpose of science? I thought it was a system for generating knowledge that more or less exposes and strips away our subjectivity and biases. Therefore, a principle of accepting as science things which subjectively play to our biases (“makes sense”, “is simple”, “requires a less radical shift to our world view”) is self-defeating.

    If science need not be testable, is there any question that would fall outside the scope of science? I’m having a hard time thinking of anything that would be, and then the question becomes, how is science then different from anything else? Everyone’s stoned college buddy was actually doing rigorous science that whole time? “Like, what if when I see the color red, you see it as green, man?” is just a non-testable hypothesis, but that’s no reason not to call it science!

    I think it’s also incredibly important to point out that the ‘devil distributing disingenuous dinosaur detritus’ theory of paleontology was developed precisely because it seemed like a more elegant solution that caused less friction with the prevailing world view at the time. Under Scott’s proposed definition of science, we would be forced to also conclude that the devil theory of paleontology was “valid science” for many decades, which seems like the exact kind of absurd result that would cause friction with our world view, and therefore, under the proposed definition, be rejected.

  29. ButYouDisagree says:

    (Atheist) philosopher Thomas Nagel weighs some similar issues and concludes that there’s nothing wrong with teaching intelligent design alongside natural selection. link

    Suppose the scientist came up with their Extradimensional Sphere hypothesis after learning the masses of the relevant particles, and so it has not predicted anything.

    Kahn, Landsburg, and Stockman have an interesting paper on whether a piece of evidence provides more support to a theory if it is discovered after the theory is generated. They argue that this depends on the process by which scientists choose research strategies and make predictions. link

    • Viliam says:

      (Atheist) philosopher Thomas Nagel weighs some similar issues and concludes that there’s nothing wrong with teaching intelligent design alongside natural selection.

      Using the arguments he makes, alongside every scientific theory equal space should also be given to a competing theory, “no, it’s not”. Not just for evolution, but… well, everything.

      If something is falsifiable, then it’s possible in principle that it is wrong, and therefore the theory “it is wrong” should also be taught at schools. Because time is not a constraint in education.

    • greghb says:

      You could in principle set up an experiment where you raised a bunch of physicists, from children, in a Truman Show world where only 90 of the 100 particles were known and the physicists didn’t have the budget to build the collider needed to find the missing 10. Then you see what theories the physicists come up with and what predictions these theories let them make. It’s good evidence for the Extradimensional Sphere hypothesis if the physicists who come up with it accurately predict the other 10 particles. (And, perhaps, no physicists without the ES hypothesis correctly predict the missing 10 particles.)

      This won’t always work and I don’t think it obviates the need for Occam’s Razor. But there are some cases where you can use a counterfactual imagining of how the research could have played out to recover some testable hypotheses. I believe the paper you link cites the same sort of thinking.

      • sclmlw says:

        The problem is that this is only a test of how much ingenuity physicists can bring to bear on a problem, not a test about the physical universe itself. A direct analogue would be to take a time machine back to pre-Copernican geocentrists and give them accurate measurements of future planetary movements for the next couple hundred years. I imagine they could create enough epicycles to explain all the historical data and yet still not have a theory capable of producing accurate predictions for new data. The power of the human brain to tell really convincing stories about historical data is impressive. That’s why we rely on hypothesis testing so heavily.

  30. macgregor says:

    Remember: “All models are wrong, but some models are useful.”

    The American Pragmatists had it right. The correct criterion for “truth” is not “what is more elegant.” The correct criterion is “what is more useful.”

    Oftentimes, simpler, more elegant theories are more useful than convoluted ones, just because they are easier to use. That is why we have rules of thumb – they get the job done quickly and we can move on. Even aside from the practicality benefits of simplicity, machine learning clearly shows there is a sweet spot between simplicity and complexity in terms of predictive power.
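
    (That sweet spot is easy to see numerically. A minimal sketch with made-up toy data, my illustration rather than the commenter’s:)

    import numpy as np

    # Fit polynomials of increasing degree to noisy samples of a smooth function
    # and score them on held-out points. Too low a degree underfits, too high a
    # degree overfits; some intermediate degree usually predicts best.
    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x)
    x_train, x_test = rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 200)
    y_train = f(x_train) + rng.normal(0, 0.3, 20)
    y_test = f(x_test) + rng.normal(0, 0.3, 200)

    for degree in (1, 3, 5, 9, 15):
        coeffs = np.polyfit(x_train, y_train, degree)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(degree, round(float(test_mse), 3))   # error typically dips, then rises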

    Of course, there is the question “useful to whom.” The Satan hypothesis is useful to persuade people to come to church. The evolutionary hypothesis is useful to paleontologists who want to find more bones.

    Unprovable speculations might be useful as entertainment or emotional catharsis (triggering positive feelings of awe and mystery), but not useful beyond that. Of course the problem is when people who want to experience “emotional truths” are contradicted by those who want to experience “material truths”, and they get into an unwinnable fight, since their criteria of “usefulness” are different.

    • koreindian says:

      I haven’t yet seen a very good (non-circular) answer from a pragmatist to the question “why are some explanations more useful than others?”.

  31. Controls Freak says:

    I think it’s more fruitful to point out that your brain is already using Bayesian methods to interpret the photons striking your eyes into this sentence, to make snap decisions about what sense the words are used in, and to integrate them into your model of the world.

    As a reminder, this probably isn’t true. Or at least, your prior should be rather small that it’s true, and we’re missing the appropriate empirical evidence to shift that prior.

  32. viVI_IViv says:

    The dinosaur hypothesis and the Satan hypothesis both fit the data, but the dinosaur hypothesis wins hands-down on simplicity. As Carroll predicts, most reasonable observers are able to converge on the same solution here, despite the philosophical complexity.

    I’d agree, yet we see even on this very forum people who seem to take Bostrom’s and Musk’s simulation hypothesis seriously, even though it’s essentially equivalent to the Satanic origin of dinosaur bones hypothesis. I conclude that most people, even the smart and educated ones, base their beliefs more on in-group/out-group markers than on rational argument.

    Is the person who made this discovery doing Science? And should we consider their theory a useful contribution to physics?

    It would be an interesting observation to make, but it wouldn’t yet be a scientific theory. If they then came up with a geometric theory of quantum physics that predicted all observations with fewer arbitrary constants than the standard model does, then they would have discovered a scientific theory.

    in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”

    Not quite. First of all you are talking here about the many-worlds interpretation of quantum mechanics, which is only one of the many multiverse hypotheses (multimultiverse?) that have been proposed in physics.

    The most controversial one is arguably the cosmic landscape hypothesis of string theory, which posits that there are infinitely many causally-disconnected regions of the universe where the physical constants take all the possible values (from either infinitely many possibilities or some ridiculously large number like 10^500). This was introduced mainly because string theory failed to predict any physical constant; rather, it is consistent with essentially infinitely many possible values, which led some proponents to pretty much throw in the towel and claim that all these possibilities must be real, which in turn led critics to respond that string theory did not predict anything and was therefore metaphysics rather than a scientific theory (cf. the Smolin–Susskind debate).

    If you want to cast this in terms of simplicity, then string theory isn’t any simpler than the standard model: in order to make measurable predictions you have to specify which one of the 10^500 (or infinitely many) universes we are in, which would take as many bits, if not more, as it takes to specify the constants in the standard model to any measurable precision. Therefore all the extra mathematical complications of string theory (strings, branes, compactified dimensions, etc.) don’t add any explanatory power.
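
    (For a rough sense of scale, some back-of-the-envelope arithmetic of my own, not the commenter’s: picking one vacuum out of 10^500 takes log2(10^500) = 500 · log2(10) ≈ 1,660 bits, while writing down the roughly 25 standard-model constants to, say, 10 significant figures each takes on the order of 25 · 10 · log2(10) ≈ 830 bits, so the landscape “address” alone already costs more than the constants it was supposed to explain.)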

    As for the many-worlds interpretation of quantum mechanics, it must still admit that something unusual happens in measurement events which causes the universal wavefunction to branch into non-interfering components. It is an attempt at reductionism, where the collapse observed in macroscopic degrees of freedom would be explained in terms of the unitary evolution of a quantum system with a large number of unobservable degrees of freedom, much like thermodynamics explains heat as the unobservable motion of a large number of particles. Unfortunately, it has so far failed to derive the formulas for measurement from first principles, which calls into question the claim that it’s simpler than other interpretations.
    The Copenhagen interpretation doesn’t hypothesize any special unknown force; it is just a “hypotheses non fingo” stance, which admits that we still don’t know what happens during measurement events.

  33. Watchman says:

    Whilst I appreciate the ideal of this post, there’s something that doesn’t work for me from a logical perspective here about the focus on simplicity. It might be a failure of comprehension, but here goes.

    At a certain level of explanation, isn’t “Satan put the bones there” simpler than paleontology? There are multiple processes required to take the bones of a dinosaur or a plesiosaur (amazingly, no-one has yet pointed out that plesiosaurs are not dinosaurs…) and turn them into a fossil, whilst for a Satan-did-it argument there is effectively only one level of argument required: however many processes are needed to explain the presence of fossils from a paleontological perspective is irrelevant, since Satan just put the fossils there. The paleontology model only seems simpler so long as no-one asks ‘how does that work?’ Once that is asked, even if we just go down to the basic processes, paleontology is not simple and Satan is. Just because we see the Satan-did-it model as an unnecessary extra layer piggybacking on a believable scientific model, we shouldn’t make the mistake of assuming that adherents of this model will mimic the complexity of paleontology rather than just use a two-stage logic (Satan devised a plan; Satan put the bones there according to the plan), which is very simple.

    My concerns around simplicity probably hold less for the Sphinx, since the Atlantis theory demands more complexity than a local ruler (identity tbc) using local labour (a process well-evidenced in Egyptian history). Yet the nature of the complexity for Atlantis (that we need to construct a totally lost civilisation and then rationalise its activities in an area with good historical records) might be significant. Atlantis is an agent that does not exist. From a historical perspective it has no evidence for its existence, no negative evidence for it being there, nothing but an allegorical story in a non-historical work by a philosopher. It cannot be used as an agent to explain why something happened or exists because it is not capable of agency.

    A similar problem occurs with using Satan as an explanation for dinosaur bones. Satan cannot be attributed any physical agency: he acts through humanity if he acts at all. No ancient or medieval source shows Satan with the power to change God’s creation, only to mislead humans; his power is psychological not material (he may be able to influence the weather as well, but so could any human wizard). Satan is not a viable agent to explain physical phenomena because that is not the function of Satan in a non-dualist theological system such as the Abrahamic religions. He is not a scientific agency any more than the voices in your head.

    Science may require simplicity to determine the more likely reason for an observation in lieu of further evidence (history relies upon this reasoning…). But simplicity is not always correct, because pseudoscience is often based on very direct and simple thinking. Pseudoscience tends to rely on an agency that doesn’t exist (“like cures like” for homeopathy, for example), which can be explained much more easily than actual science. I think that agency might be a better defence than simplicity: can the agency causing the change be identified or inferred by observation (from a position of relative ignorance I’d suggest the multiverse can be inferred)? If so, it is possible. If not, go away and work on what the agency is, because a hypothetical agency needs the hypothesis developed before it can be considered. Simplicity and Bayesian reasoning are good tools, but the precondition to their use is surely that the agency required to achieve the outcome must be an agent that exists.

    • sty_silver says:

      I think Satan is extremely complicated when you actually ask how it works.

      Eliezer Yudkowsky talked about this issue here.

      • smack says:

        Yeah, but what EY always ignores when talking about this is — simple according to whom? You can easily have a universal computer where “Satan” is one of the built-in primitives, and then Satan is very simple (the human mind seems to be such). This is no more or less arbitrary than any other choice of universal computer (which is one of the problems with universal-computer analyses of simplicity: those arbitrary constants can be hella big in practice — literally in this case).

        EY always seems to assume that the relevant universal computer is something much like an implementation of Python, or something, in which (indeed) Satan measures as very complicated. But that’s simply not a justifiable assumption. For a human-mind-like UC, Satan is much simpler than quantum electrodynamics.

        • sty_silver says:

          Yeah, but what EY always ignores when talking about this is — simple according to whom?

          Well, he gives an answer to that, as you point out, so he doesn’t “ignore” it. But it’s true that he doesn’t explain why python is a better standard than english. I guess he thinks it’s obvious.

          I would argue like this: programming languages specify a program unambiguously (up to platform dependence and parallelism and specific ambiguous constructs and such), since obviously you can compile and run them, and English doesn’t. Isn’t that a straightforward demonstration that programming languages are the more fundamental metric?

          • smack says:

            No. Not at all. Or at least, not at all relevantly.

            It’s true that programming languages are precise in a way that English isn’t. But it’s also true that one could define a perfectly precise programming language that had “devil” or “witch” as a primitive, and it would be completely impossible to give any non-arbitrary argument for why it, or Python, was more fundamental in terms of defining Kolmogorov complexity.

            This is the problem with KC (though it’s beautiful and has some lovely uses). There is no way even in principle to distinguish a preferred universal computer, and the constants between one and the other can be arbitrarily large; there could be a language in which a program implementing the whole Bible took only one character, but which was otherwise identical to Python (modulo coming up with a replacement for that character, say). In the limit of infinite program length, they’d agree about what was complex. And that would tell us precisely nothing.
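            A toy sketch of that point, purely illustrative and not real Kolmogorov complexity (the message and the primitive sets below are my own contrivance): the same string costs 22 tokens in a language whose only primitives are single characters, and 1 token in a language that happens to ship it as a built-in.

```python
# Toy "description length" relative to a chosen set of primitives.
# An illustration of language-relativity, not actual Kolmogorov complexity.

def description_length(message, primitives):
    """Greedy left-to-right encoding: count the primitive tokens needed to spell out `message`."""
    tokens, i = 0, 0
    while i < len(message):
        # prefer the longest primitive that matches at this position
        match = max((p for p in primitives if message.startswith(p, i)),
                    key=len, default=None)
        if match is None:
            raise ValueError(f"cannot encode {message[i]!r}")
        tokens += 1
        i += len(match)
    return tokens

message = "satan buried the bones"

lang_a = set("abcdefghijklmnopqrstuvwxyz ")       # characters only
lang_b = lang_a | {"satan buried the bones"}      # same, plus one built-in primitive

print(description_length(message, lang_a))  # 22
print(description_length(message, lang_b))  # 1
```

            Nothing internal to the formalism rules out the second language; that arbitrariness is exactly the additive-constant problem mentioned above.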

            We have an example of a computer that has human-like concepts as primitives: the human brain. True, it’s not as precise as a compiler, but no, that’s not importantly relevant (see above). This is just EY giving impressive-sounding pseudo-arguments against things he doesn’t like, using concepts that don’t actually support what he’s saying. (Of course, he’s right about witches; Kolmogorov complexity just doesn’t help show that he is.) He may think it’s “obvious” that commonly used computer languages are somehow preferred for defining KC (your quote, not his, I’ll point out for clarity), but it’s not only not obvious, it’s not correct.

          • sty_silver says:

            Would you agree that a formal Turing Machine is a fundamentally simpler concept than English or python?

          • smack says:

            I’m not sure what you mean. Both are universal computers (well, the Turing machine and Python, given infinite RAM). Either one can be used to define Kolmogorov complexity. So no, I don’t think I’d agree that one is simpler *in a relevant sense here.*

            That is, if you’re trying to say, “X model of a Turing machine is simpler than Y computer and is therefore better for defining KC for purposes of Occam’s razor,” then that is incorrect (even according to any standard development of KC). One could always, after all, make another Turing machine (tape, etc.) that had a one-symbol primitive for “witch,” but in which, say, “and” was a relatively complicated and wordy expression. That would seem odd and contrived — to us. But we’re working with particular hardware/software, too.

            Now, would I agree that the Turing machine is a simpler concept *to us,* i.e., has fewer moving parts to describe in our language? Sure. But that’s not relevant.

          • sty_silver says:

            That is, if you’re trying to say, “X model of a Turing machine is simpler than Y computer and is therefore better for defining KC for purposes of Occam’s razor,”

            Yes, that is what I’m trying to get at.

            So there’s the no free lunch theorem in machine learning, which says that, for every learner, there is a learning task on which it performs poorly. One needs prior knowledge of some kind to succeed with any task.

            I think you’re playing a similar game here. Let’s suppose you’re right that there is no way to formally say that a Turing machine is better suited to define KC than a language which uses Satan as a primitive. That’s the analog to the no free lunch theorem. For every problem, there is some programming language such that KC defined via that language is short.

            But then you’re arguing against Occam’s Razor altogether, and we know Occam’s Razor is useful. So why not use our prior knowledge about the real world to choose a Turing machine as the universal computer to define KC? Let’s bias ourselves towards a system that has performed well in the past.

          • smack says:

            No, I’m not arguing against Occam’s razor altogether. I’m arguing against using KC/Solomonoff Induction to define or support it, because it doesn’t work.

            It’s worked well in the past? Well, the Turing machine that’s exactly the same but which also has a one-symbol primitive for “Satan” has also worked just as well in the past. Which one do you plan to use? Why?
            (Note here that I’m talking about a *Turing machine* with a Satan primitive, not a language, in case that’s a prejudice you have. Turing machines also can have different designs.)

            There is no “canonical” Turing machine. There is going to be an arbitrary choice, and that’s true even among all the ones that have performed equally well up to this point. (See also: Green/Grue).

            My point is not that Occam’s razor is wrong. It’s that Eliezer’s arguments regarding it, based on KC, are facile and incorrect.

          • sty_silver says:

            It’s worked well in the past? Well, the Turing machine that’s exactly the same but which also has a one-symbol primitive for “Satan” has also worked just as well in the past.

            This argument is equally applicable to prior knowledge in machine learning. Say you know that for a certain curve-fitting problem, quadratic models have performed well in the past. Say you have an instance of 1000 points. We could fit a quadratic curve, since we know those have worked well in the past. But the class of all quadratic functions plus the one function that is the 1000 degree polynomial which fits the current instance perfectly has also worked well in the past (because it would never be selected).
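            To make the overfitting half of that analogy concrete, here is a minimal sketch (the degrees, noise level and seed are my own illustrative choices, and it shows plain overfitting rather than the exact hypothesis-class construction above): a fit with enough degrees of freedom to pass through every training point drives training error to essentially zero while typically doing worse than the quadratic on held-out points.

```python
# Minimal overfitting sketch: quadratic fit vs. an interpolating high-degree fit.
import numpy as np

rng = np.random.default_rng(0)

def true_curve(x):
    return 3 * x**2 - 2 * x + 1               # the underlying "law"

x_train = np.linspace(-1, 1, 10)
y_train = true_curve(x_train) + rng.normal(0, 0.3, size=x_train.shape)

x_test = np.linspace(-1, 1, 101)              # held-out points from the same law
y_test = true_curve(x_test)

for degree in (2, 9):                         # degree 9 can interpolate all 10 points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```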

            The way I view this current debate is that Occam’s Razor is an informal principle which performs well in practice, but relies on intuitive explanations to be justified. KC is a theoretical explanation which requires a lot less intuition (but still more than zero) to be justified. You’re pointing at that remaining intuition and concluding that the entire approach should be thrown away, and that we should return to just Occam’s Razor and its much larger need for intuition.

          • smack says:

            First, an additional comment to your previous (not current) post, then I’ll respond to your most recent.

            You say, “Let’s bias ourselves toward a system that has performed well in the past.” Whatever else that might be, though, it can’t be a Turing machine. At no point has science actually progressed by formalizing descriptions via a Turing machine and then counting program length. In fact, given current knowledge, it would probably be much easier (though hard) to program, on a Turing machine, a “Satan did it” scenario than to simulate 4 billion years of evolution and come up with dinosaur bones. That’s not a reason for preferring the former; I’m just categorically rejecting your suggestion that “Let’s use what’s worked in the past” would ever get you to “Turing machine.”

            Indeed, the actual arguments that led to the preferring of the paleontological model took place among humans, using rich human concepts. Induction would thus strongly suggest continuing to refer to English rather than Python or Turing machines for these discussions (but surely all have their uses).

            Now on to your current post: To be clear, again, I’m not suggesting that we dispense with Occam’s razor. Rather, I’m saying that KC is not useful for formalizing Occam’s razor. What KC can get you (in this context) is a lot of fancy and impressive-sounding mathematical jargon to make you feel better about yourself; what it will then give you, at the end, in terms of content, is precisely whatever assumptions you put in. Nobody seriously arguing for the “Satan did it” position will *or should* view this as an impressive argument against their position. Moreover, it has nothing to do with the reasons that scientists did, on the whole, actually accept the paleontologists’ picture and not the creationists’ (which were much more complicated and subtle and so harder to put into a blog post, but which had the merit of not being provably invalid).

            Somebody listening to EY can just say, “Yeah, well, I just use a Turing machine with a Satan primitive (and other human-level concepts), and it’s also worked just as well in the past.” And now EY’s argument is without force against this person, and probably so is induction (since it *has* worked as well in the past). You now get to have an argument about which provably-arbitrary choice of Turing machine is less arbitrary. Enjoy!

            So to be clear, and at the risk of being repetitive: I’m not saying that KC needs less intuition, but still some, to be justified. I am saying that it is unjustifiable, because wrong (as an explanation/motivation for O’sR).

            I don’t quite grok what the force is meant to be of your machine learning analogy. Would you mind spelling it out more precisely for me, please? That is, I understand the content of your overfitting example where we look at the class of “quadratic plus this single 1000th-order” curve. What I don’t see is the argument turning that example into a problem for my position.

          • sty_silver says:

            I don’t quite grok what the force is meant to be of your machine learning analogy. Would you mind spelling it out more precisely for me, please? That is, I understand the content of your overfitting example where we look at the class of “quadratic plus this single 1000th-order” curve. What I don’t see is the argument turning that example into a problem for my position.

            I think this part of the discussion went something like

            You: KC doesn’t work because there is always some language in which Satan is simple

            Me: {draws analogy to No-Free-Lunch theorem}. So we have to bring our prior knowledge in, just as we do in ML, and the prior knowledge says that a Turing Machine is the way to go.

            You: Don’t agree with this because TM with Satan primitive would have worked just as well

            Me: By the same logic, the quadratic class isn’t an especially good candidate for the current problem (set of 1000 points), because the justification for the quadratic class is “it has worked well on similar problems in the past”, however, the class “quadratic functions plus the 1000 degree polynomial that has zero error on the current point set” has performed identically well in the past. So we might as well use that class for the current problem. And if we do, it outputs the 1000 degree function as the best match, since that one has 0 error.

            It seems to me that if your argument were valid, then the above would be valid, which it obviously isn’t.

            In fact, given current knowledge, it would probably be much easier (though hard) to program, on a Turing machine, a “Satan did it” scenario than to simulate 4 billion years of evolution and come up with dinosaur bones.

            You’re smuggling runtime into the equation. The definition of KC is “the length of the shortest computer program that produces the object as output.” The TM merely needs to be able to simulate evolution. How long it would take is irrelevant.
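            For reference, the textbook statement of that definition for a fixed universal machine U, together with the invariance theorem (notation mine, not anything anyone here wrote):

            $$K_U(x) = \min\{\,|p| : U(p) = x\,\}, \qquad K_U(x) \le K_V(x) + c_{U,V}$$

            No runtime appears anywhere in the definition, and the constant c_{U,V} depends only on the pair of machines U, V, not on the object x; that machine-dependent constant is exactly what is in dispute in this thread.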

            For the Satan machine, you actually have to describe exactly what Satan is and how he does his things. The runtime would be a fraction of the evolution TM’s, but the code of the machine, the state transition table, would be far larger.

            You say, “Let’s bias ourselves toward a system that has performed well in the past.” Whatever else that might be, though, it can’t be a Turing machine. At no point has science actually progressed by formalizing descriptions via a Turing machine and then counting program length.

            No, but the principle “privilege the hypothesis that would be easier to implement on a TM” has performed well. Nothing about KC implies that anyone actually has to implement the program. We know, roughly, how long a TM would be without writing it.

            Somebody listening to EY can just say, “Yeah, well, I just use a Turing machine with a Satan primitive (and other human-level concepts), and it’s also worked just as well in the past.” And now EY’s argument is without force against this person, and probably so is induction (since it *has* worked as well in the past). You now get to have an argument about which provably-arbitrary choice of Turing machine is less arbitrary. Enjoy!

            Yeah, my response to that is the ML analogy.

            So to be clear, and at the risk of being repetitive: I’m not saying that KC needs less intuition, but still some, to be justified. I am saying that it is unjustifiable, because wrong (as an explanation/motivation for O’sR).

            Right, noted.

          • smack says:

            OK, thank you for the clarification.

            To reply in reverse order: my point wasn’t about runtime, and I’m not smuggling that in. My point was about programmability. I don’t think that, in fact, we could write a program that would simulate the evolution of life on earth from the beginning through dinosaur bones at all, period, no matter how long it ran. I think there is far too much we don’t understand about programming models and even far more that we don’t understand about life and its specific history, initial conditions, etc. We could, just barely, write a simulation of the devil planting bones in the rock. Admittedly, maybe this one would be much longer than the other *if we could write the other,* but if you think so, then you don’t think so based on comparing actual programs, because the former does not exist and, at this time, could not exist.

            Moving up to machine learning: thank you for the explanation. I don’t see that as a problem for my argument. I agree that one should not include the 1000th-order polynomial. But the thing is, when one is writing an ML program, one is not trying to provide a theoretical underpinning for Occam’s razor; one is trying to teach a machine to learn. So it makes good sense to just *use* Occam’s razor and rule out 1000th-order polynomials on that basis. It does not, on the other hand, make good sense to do so in the KC setting where one is trying to give a bottom-up *justification* for Occam’s razor.

            An additional and more important distinction is this: second-order polynomials are an easily and non-arbitrarily distinguishable class of fitting functions, “the simplest” (or second- or third- simplest?). There is not an analogous class of easily, non-arbitrarily distinguishable simplest Turing machines. When I change from second-order polynomials to including 1000th-degree polynomials, I’m moving monotonically up the complexity ladder in a universally-definable way. But different Turing machines aren’t like that, or nobody has discovered how they are / there are good reasons to think they’re not (see the sub-discussion below and the papers I linked). But there are going to be machines with “Satan” primitives for which a “reasonable” TM would be quite hard to emulate (even if it’s not the one that I defined by just adding one more primitive!). The “proofs” that attempt to derive epistemic *reliability* from *simplicity* using KC (and thus give a KC motivation for Occam’s razor) notably do *not*, in any case, rely on any idea of simplicity of the TM (assuming, contra current state of the art, that such a thing could be defined).

            Now, if you want to just say that you’re using KC / SI to *express* Occam’s razor, and that your preferred form of the razor uses a human-standard Turing Machine, or IBM Basic, or whatever, that’s of course up to you, and at least it gives you a well-defined (though non-computable) sense of simplicity. But what it does *not* give you is any particular likelihood of being correct in your conclusions, or any arguments against the Satan-did-it-er.

            Incidentally and tangentially: another problem with adding only a single 1000th-degree polynomial to your learner is of course its extreme brittleness in the face of new data. Here the analogy fails. A good Satan implementation (sic) could be quite flexible.

            In sum, the analogy doesn’t trouble me.

          • sty_silver says:

            To reply in reverse order: my point wasn’t about runtime, and I’m not smuggling that in. My point was about programmability. I don’t think that, in fact, we could write a program that would simulate the evolution of life on earth from the beginning through dinosaur bones at all, period, no matter how long it ran. I think there is far too much we don’t understand about programming models and even far more that we don’t understand about life and its specific history, initial conditions, etc.

            Can we agree on this as a crux? As in, whoever is wrong on this point is wrong on the entire point? That way we could reduce the argument to a smaller domain. Regardless of your answer, certainly I would lose all my confidence in KC if you were correct that we could not confidently state that implementing evolution on a TM would be shorter than simulating the devil. (Note that we don’t need to be able to do it (though I think we can), we only need to be confident that it would be shorter.)

            This point is particularly important because I think Occam’s Razor will give the same answers as KC based on a TM in 99% of all cases, and the remaining 1% are the ones where you’re incorrect about your intuition of simplicity and would probably change your mind if you understood the problem better. So from that PoV, arguing against KC but for Occam’s Razor is weird. But that all rests on the above.

          • smack says:

            I don’t think I can agree on this as a crux, because I’m not claiming that implementing the evolution of life would be longer than implementing the devil (sorry if it sounded like I was). I suspect that, on standard software, it might well be shorter. But I just don’t think we’re capable of doing it. That’s not the same as saying it can’t be done — maybe once we know much more it can — but I don’t think we know enough right now to do it.

            The only point of that claim was meant to be that whatever form of Occam’s razor we’re applying in this debate (devil-vs-paleontologists), it’s certainly not KC, because we don’t actually have the two programs, and one of them I don’t think we currently have the knowledge to write.

            To be clear: this is not a claim that the program is impossible in theory to write. It’s just a claim about our current state of knowledge about the relevant fields.

            So even if you agreed with this claim, it would not imply that KC(Devil) < KC(Paleo). Just that the latter cannot actually be bounded at this time.

            To your last point, about OR giving the same answer as KC with TM 99% of the time — again, *which* TM? Once more, I have no problem if you find it helpful to prime your intuition using KC and your favorite TM. We all need all the help we can get with intuition, goodness knows, and I’m sure I’d be interested to hear about the 1% case where it surprised you. My beef is with using KC as a *formal justification* for OR and then deploying it against specific alternative mechanisms, as EY does, when in fact the indeterminacy inherent in KC (which follows through any current formalization of OR using it) means that such an argument will always fail and leave the Devil-ist (or whoever) justifiably unimpressed.

          • sty_silver says:

            To your last point, about OR giving the same answer as KC with TM 99% of the time — again, *which* TM?

            The most primitive TM. Two symbols (stroke, not-stroke), a finite state set, a state/symbol -> state/symbol/movement transition function, an initial state, and a final state. On your second-latest post, you said that

            There is not an analogous class of easily, non-arbitrarily distinguishable simplest Turing machines.

            I’d argue the TM with just two symbols is objectively simpler than other TMs in the same way that the class of polynomials of degree 2 is simpler than a polynomial of degree 1000. So let’s use that one.
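            Just to pin down what that bare machine looks like when written out (the Python encoding and the unary-successor example are mine, purely for illustration):

```python
# A two-symbol Turing machine: tape symbols 0/1, a finite state set, a
# (state, symbol) -> (state, symbol, move) transition table, a start state
# and a halting state. Example machine: unary successor (append one stroke).
from collections import defaultdict

def run_tm(transitions, tape, start, halt, head=0, max_steps=10_000):
    cells = defaultdict(int, enumerate(tape))   # unwritten cells read as 0
    state = start
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells[head]
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

transitions = {
    ("scan", 1): ("scan", 1, "R"),   # move right over strokes
    ("scan", 0): ("halt", 1, "R"),   # write a stroke on the first blank, halt
}

print(run_tm(transitions, [1, 1, 1, 0], start="scan", halt="halt"))  # [1, 1, 1, 1]
```

            The transition table is what I meant earlier by “the code of the machine”; its size, not its runtime, is what KC counts.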

            The only point of that claim was meant to be that whatever form of Occam’s razor we’re applying in this debate (devil-vs-paleontologists), it’s certainly not KC, because we don’t actually have the two programs, and one of them I don’t think we currently have the knowledge to write.

            Yeah, I’m still arguing that claim. I think we can use KC.

            We don’t need to have the programs at hand. All we need is to be able to argue about the size of the smallest implementation.

            But I just don’t think we’re capable of doing it. That’s not the same as saying it can’t be done — maybe once we know much more it can — but I don’t think we know enough right now to do it.

            But we also don’t need to be able to write them. We only need to argue about their size.

            So do you claim we can’t argue about their size? Would that be a crux? (We can take the tougher claim of “estimating the size of the smallest implementation for any given problem is no harder than estimating its simplicity in other ways”, it doesn’t need to be just about evolution). Tangentially, I actually do think we could implement evolution right now. I think we could even literally produce a specification for the 2-symbol TM (because you could write it in another language and then compile it). Specifying the TM directly should also be possible, but it’d probably take longer than a lifetime.

          • smack says:

            Making it a two-symbol, finite-state TM doesn’t help anything. There still has to be some interpretation / internal implementation, and the upshot is that the Satan program can be arbitrarily short, and the “standard” TM that you want can be arbitrarily long to implement in such a thing.

            The fact that you can’t just specify around this is not something I’m making up, or a new problem with KC. It’s been there since the beginning and is a very oft-discussed fact in the literature, and I’ve linked (below) and referred to two papers about it. The second one discusses an explicit example of how a two-state TM can still be arbitrarily bad. The first one shows why KC / SI does not work well to *justify* Occam’s Razor.

            So restricting to a two-symbol machine doesn’t buy you anything. Maybe you’ll just decide to restrict yourself to a single, fixed machine, therefore.

            But the key point is that, no matter how much you restrict, and no matter how much you care which implementation you use, the “proof” that simple programs are more likely to be true does not care which implementation you use, and works equally well for any TM (or universal computer), so it will not compel anybody to agree with your choice, and so will fail as a justification of O’sR.

            I believe I’ve made my point as well as I’m able at this time, and we’re kind of going around in circles. I hope it won’t be interpreted as rudeness if I therefore disengage here (I’m pretty busy this week) without further resolving anything. I do apologize for the ways in which I was unclear throughout the conversation.

            Oh, and to your last question — no, it wouldn’t be a crux. That whole sub-line was just a tangent, specifically a second argument against the position that induction would justify using KC to define/justify O’sR. But even if (contra my claim, which I stand by) we could program such a thing, it would continue to be correct that it has not been the practice of the scientific community to do so, and inductively, OR’s successes are not rightly attributable to KC.

            Maybe we can have further discussion in a future comment thread!

        • dionisos says:

          For a human-mind-like UC, Satan is much simpler than quantum electrodynamics.

          The idea of “Satan” as we intuitively understand it, which means without any detail, and which mostly doesn’t explain any data, is simple for the human brain (but probably not that complex for python either).

          The idea of “Satan” with all the specifics of how it works, specifics which should in particular explain why quantum electrodynamics works as it works, seems hella complex for both humans and python (and more complex than quantum electrodynamics).

          Anyway I agree there is a difficulty with the primitives.

          • smack says:

            Yes, I agree that we under-specify “Satan.” But the point is that even if you fully specified it, and implemented it in say Python in your simulation of the universe — well, that would be an extremely complicated program, in Python, but there would be another language where it was primitive (as you agree). This is why KC is defined only up to arbitrary additive constants, and why it isn’t actually useful for addressing Occam’s razor.

            In particular, there is going to be a machine where every one of the (say) 30,000 most commonly used English nouns has a short descriptor. This cannot be non-arbitrarily ruled out as the correct arbiter of simplicity.

            Also relevant is this paper I ran across a few years back:

            https://core.ac.uk/download/pdf/157867152.pdf

          • dionisos says:

            I consider an “and” to be much more “primitive” than “thermodynamics”.
            In general I consider it to be more primitive because it modifies fewer bits.

            Now why would I consider a Turing machine with more “primitive” primitives to be more legitimate than others?
            I don’t have an answer I could derive from pure abstraction, but:
            – If there is one, I am interested to know it, and I think it is an interesting question.
            – If there is none, but we could build another, similarly justified principle, I would choose that principle instead.
            – Otherwise I would just consider it to be one of the metaphysical principles we need to say anything about the world (in the same way we need some epistemological principles to think at all).

          • smack says:

            I agree it’s an interesting question. I’m skeptical that one could justify a principle from abstraction, and very skeptical one could argue that it should be correlated with truth or reliability (which should in some sense be a goal of Occam’s razor justifications, IMO). But I agree it would be interesting to think about.

            Here’s an interesting paper of some relevance (cited in the previous paper I linked):

            https://arxiv.org/abs/cs/0608095

  34. RomeoStevens says:

    I think Occam’s razor is mildly confusing to people because it can just push things up one level in the stack, to how we’re counting things as simpler. I prefer to think of it in terms of modularity. Causal models that are side-effect-free can be stacked up together to build complex interventions on the environment. Naive simplicity is one form of modularity, but might be dispensed with for a more complicated but more cleanly modular model. We might also say that we’re pursuing global simplicity, which necessitates escaping local minima. (I’m really just restating Quinean naturalism here.) Naturalism concerns the ur-hypothesis about the unity of nature (neutral monism, etc.)
    It’s pretty easy to see how such a dynamic gives rise to Kuhnian dynamics. New theories propose new simplifications but haven’t yet proven that they can generate better APIs to all the things the old theory interfaced with.

    I also think delineating it as non-empirical is a bit much. The output of an information-processing system that I believe is causally entangled with reality is in fact evidence, and I can update over the history of this output. I.e. if Nikola Tesla tells me my circuit won’t work, I’m likely to spend a lot more time at the drawing board rather than building it in the hope of proving him wrong.

  35. ajfirecracker says:

    Multiverse theory, to my mind at least, requires a lot more weird stuff to be true than other interpretations of physics. There’s an entire universe which we can never interact with or observe, but which at least in principle we can predict every detail of down to the seemingly-random behavior of individual particles. Superposition (where a particle is not clearly in one state or the other) is a far simpler explanation to my mind, and one which admits of the potential for further experimentation to boot.

    • arbitraryvalue says:

      Multiverse theory isn’t an alternative to quantum superposition. In terms of Schrodinger’s cat, the universes branch when you open the box and look, not when the cat is or isn’t killed. (This analogy is of course a gross oversimplification. I’m not an expert.)

      • ajfirecracker says:

        Your comment highlights the strangeness of multiverse theory – not only can we predict the properties of unobservable universes, we can predictably create them! Far simpler to claim a particle is in an unusual state.

        • Viliam says:

          not only can we predict the properties of unobservable universes, we can predictably create them

          Yeah, we can create “new universes” when we let particles interact with each other.

          Also, “new universes” are created all the time whenever particles interact naturally, so this ability is no big deal.

          On the other hand, in Copenhagen, whenever a human looks, “universes” are destroyed which would otherwise have continued to exist. This is stronger magic.

          • ajfirecracker says:

            Extraordinary claims require extraordinary evidence. I think any claim about creating universes, destroying universes, or knowing about parallel universe without possibility of observing them requires extraordinary evidence.

  36. antilles says:

    TIL People still want to use Popper to do philosophy of science. I’m honestly surprised that this is the case because while I like Popper we all need to be a little more literate (read Larry Laudan specifically).

    The hypothetico-deductive model cannot be salvaged and there’s a simple reason why: when you perform a test, you are not just testing your hypothesis but a list of statements (assumptions about the world, about your equipment, about physics) that all stand or fall together to produce your prediction. So: my orbital prediction of the planet Mercury will be true if the planet has the mass I think it does AND the sun has the mass it does AND my telescope is working properly AND the laws of physics are what I think they are AND (probably some other things).

    If your test is falsified, what gets falsified is this conjunct, not some standalone hypothesis, and by DeMorgan’s law, the negation of an AND is an OR statement. So one of the original assumptions is false, but *we don’t have any deductive justification to pick one over the other.*
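    In symbols (notation mine), with H the hypothesis under test and A_1, …, A_n the auxiliary assumptions, a failed prediction only licenses the disjunction:

    $$\neg(H \land A_1 \land \cdots \land A_n) \;\equiv\; \neg H \lor \neg A_1 \lor \cdots \lor \neg A_n$$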

    Couldn’t you just conduct slightly different tests and superimpose them to figure out which assumption is false? No, because the list of technically possible background assumptions to test is infinite: once we run down all the plausible candidates like things about the laws of physics, we have to enumerate IMplausible candidates like the Devil not playing tricks on us. You might say we should dismiss those assumptions as being unimportant, and I agree, but do you have any *deductive* justification for your “importance” or “reasonability” criteria? (I’ll wait).

    Of course, you can use Bayesian statistics instead of deductive inferences to separate your candidates for which assumptions are false! But Bayesian statistics already makes room for induction when you are assigning prior probability weights, so Popper would already be mad at you. I’d argue this is just a way of quantifying inductive reasoning to make it a little more rigorous at tackling this falsification problem and I therefore give it the Larry Laudan seal of approval.

    TL;DR deductive reasoning is an important tool but if you try to build a model for science using only deductive reasoning it breaks in horrible ways and induction keeps trying to sneak into the process.

  37. The original Mr. X says:

    Could all this be solved by just ditching “SCIENCE!!!” in favor of more precise and granular terms describing different disciplines of knowledge-seeking?

    Personally I’d go back to the older use of “science” to mean simply any organised body of knowledge, but your suggestion works as well.

  38. eh says:

    Going back to Satan vs paleontology, what jumps out is that I can’t think of any reason why simplicity would be linked to truth, but I can easily understand how it’s linked to effectively modelling the world. What does it mean to “prefer” one model over another, if not to believe that one is simpler to apply with the same predictive power? The Satan theory is still equally predictive because it wholly encapsulates paleontology, and so using the Satan theory we can make all the same testable predictions, it’s just pointless to use.

    Given that multiverse theory does not predict anything, it’s still perfectly possible to choose the simplest and most elegant unpredictive theory using a heuristic built to find the best predictive model, but we’re not enhancing our predictive power by doing so. If you think that science is all about predictive power rather than truth, this feels a bit like saying it’s possible to determine which soap smells tastiest, and therefore soap is a food.

  39. Watchman says:

    Off topic, but has anyone ever got the ‘Sphinx was built by Atlantis’ school of new historians to argue this with the young-earth creationists? After all, the Atlantis-in-Egypt theory requires the earth to be older than the young-earth creationists will allow…

    And frankly I don’t see why the empiricists should have to be involved in all the arguments. Can’t we let the pseudoscientists at least work out which contradictory theory holds up better before they expose the rest of us to them?

  40. smack says:

    I think most people — at least most physicists — who dislike the multiverse do so not only because it doesn’t make empirical predictions, but because they disagree that it’s actually more elegant. (Disclosure: I agree with them.) Peter Woit, for example, will argue persuasively that adding multiple universes does *not* yield any more elegant or pleasing organization of particles, except by extremely arbitrary choices in the space of multiverses. To use your (Scott’s) artificial example, it might be as if the particles were represented, instead, by the length, area, and volume of circles/spheres/balls of radii a, 5a/7, and 17a/13, and we were expected to believe that those numbers themselves were somehow natural. At some point, it’s not so clear that elegance is really winning the day.

    Another very forceful critique of Woit’s is that many such theories actually are not theories at all, and make *no* predictions whatsoever. (Not even the predictions that the downstream theories like QFT do make.) This, so far as I can tell, is often or usually true. (See e.g. https://www.math.columbia.edu/~woit/wordpress/?p=9938 , among many.)

    Importantly, none of this denies or even dissents from Scott’s main point about evaluating theories; it just questions the hypothesis that that analysis applies to the multiverse.

  41. niohiki says:

    I’ll say this as someone who did his PhD in theoretical quantum mechanics, joined string theory, deeply believes both that string theory will not turn out to be a theory of reality and that it is nonetheless of immense scientific value, and considers that the many-worlds fancrowd is quite annoying and mostly motivated by the string-theory phenomenologists who so badly want string theory to be real that they “solve” the landscape problem by “something something multiverse something anthropic principle”.

    Many-worlds quantum mechanics, in itself, should not be such a problem.

    For instance, @bugmaster says about the Sphere model

    The answer is emphatically “yes”, because our two models are “the distribution of particle sizes is arbitrary”, vs. “the distribution of particle sizes follows a specific mathematical relationship”. This is something we can actually test.

    But that’s exactly the point! We can also actually test the predictions of many-worlds quantum mechanics, namely, exactly the predictions of (wavefunction-collapse agnostic) quantum mechanics, minus the mystery mechanisms required for wavefunction collapse. I mean, do we seriously want to say that Occam’s razor favors the Copenhagen interpretation where there’s some discontinuous jump between the universe and whatever an observer is, or magical faster-than-light-interactions arranged in just such a way that no causality violations can happen?

    This ties in with, for instance, @zluria

    To a total physics boor like my self, it seems totally obvious that the many-worlds interpretation makes far weirder assumptions than almost any other interpretation.

    It doesn’t! It really doesn’t! It says “listen, the experimentally-validated equations say that a bunch of different states evolve and separate from an initial state, so let’s just assume that we’re just riding one of those and the rest just go their own ways for as long as there’s no evidence to the contrary”. Just as if I see a ball of glass marbles crashing into the ground, lots of bits flying away from my view pretty fast, and despite never getting to find all of them again, I just assume that they ended up somewhere, sure, whatever. I do this by virtue of lacking any evidence that they have magically disappeared or were not real to begin with but rather some highly contrived optical effect (respectively, the equivalents of the Copenhagen or extra-hidden-interactions interpretations). We tend to say that we reject many-worlds because it brings unneeded complications to the theory, which should not be brought in until evidence requires it, but honestly it’s actually many-worlds that is saying “I’m not going to assume anything more than quantum state evolution until someone brings me evidence of that something more”.

    And that’s the thing, you cannot opt out of many-worlds by just saying “there’s no multiplicity of realities”, you have to take the baggage that comes with it – either QM is totally wrong, or you have to add some highly contrived mechanism such as observer-induced wavefunction collapse or yet unknown forces coordinating so as not to violate locality.

    (Related SMBC, of course)

    Now it could be argued that “it’s just a math thing”, summarized quite well by @Faza in

    I’ve made this point elsewhere, but worth repeating here: if many worlds gives you simpler maths, by all means use the simpler maths. However, simple maths doesn’t obligate you to accept that there actually exist many worlds.

    What “exists”? Do electrons exist? In the sense of being perturbations in a fermionic quantum field permeating the universe? The map is indeed not the territory, but in that sense, when considering the most fundamental models of reality as “map” (and honestly, already Kant remarked that long ago), the territory is something that can never be known, or affirmed to exist.

    Many-worlds is the statement that you can pretty much explain observed phenomena by just requiring that there’s a space of quantum states as required by Schroedinger’s equation, we’re in one of them, and you don’t have to worry too much about the rest.

    It is true that when the most vocal pro-many-worlds people go on about it, there’s something more, and they don’t mean just that. It’s what can make them so annoying. It is not infrequently used to implicitly conclude philosophical positions, and sometimes ethical positions, in an unjustified way. In that sense, I agree with many criticisms.

    (Doing a little strawmanning: “many-worlds must be true, therefore I will actually take all possible actions, therefore I have no responsibility over them, and therefore no need for morality!”. No, sorry, you’re just a bad person.)

    But it is then also true that when the most vocal anti-many-worlds people go on about it, there’s something more, and they don’t do it with a strict objective of preserving scientific integrity. I am mostly hearing variations of:
    -I want a notion of “reality” that is familiar in the sense of classical physics, where you can touch and push and see things.
    -I really don’t like some attached philosophical/ethical positions many-worlders typically have.
    -The notion of having many parallel worlds seems abhorrent to me.

    These are perfectly valid personal opinions, but I think they do not help the actual physics problem. And caring as I do about the actual physics problem, I have yet to hear someone propose a currently testable, falsifiable mechanism that either explains wavefunction collapse, or explains the Bell-type experiments without requiring state superposition.

    Seriously, it is the non-many-worlds position that needs to bring something falsifiable to the table.

    • smack says:

      “Seriously, it is the non-many-worlds position that needs to bring something falsifiable to the table.”

      Oh, come now. Non-MWI QM makes the same predictions as MWI. You may think that the mechanisms it proposes to do so (subjective interpretations, objective collapse theories, the pilot-wave “pointer” of Bohm) are inelegant, or falsified, or what have you, and that’s fine; but it manifestly is not the case that one needs MWI in order to make interpretations using QM. (People made interpretations just fine before Everett.)

      Indeed, as I referred to above, I think the strong *technical* problem with MWI is the absence of a good motivation for Born’s rule, i.e., predictions in QM. One seems inevitably to be left needing to just stipulate a measure (and that seems to me even less elegant and convincing than any of the major alternatives, but of course a problem with elegance is its subjectivity) or to retreat to decision theory or the like (which not only doesn’t work really well, but is just actually an admission that one is not going to try to make predictions and a withdrawal from an objective theory).

      I’m not trying to be over-harsh — I agree that MWI has some extremely appealing features, features that can even seem overwhelmingly convincing when one forgets the problems. But the problems do exist, and I think the exuberance you express is all too typical of “You’re all being unreasonable, there is NO reason for your benighted recalcitrance!” presentations of MWI.

      Incidentally, I agree with you on both points about string theory.

      • niohiki says:

        Ah, sorry if I came across as pro-many-worlds – I don’t really care to pick a side (“shut up and calculate!”), and I’ve also found myself arguing with convinced many-worlders because I don’t like the “but it’s so cool!” attitude either.

        My point is that many-worlds is a somewhat minimal setup once you accept that there’s a Hilbert space (not really minimal because, as you mention, the assignment of probabilities, and so of the “current” world, is still ad hoc). I wanted to highlight that non-many-worlds is usually sold as the “intuitive”/”obvious”/”common sense”/”unadorned” explanation, both by the anti- (because they want to criticize many-worlds) and pro- (because they want to mystify many-worlds) camps. And yet rejecting many-worlds also requires extra mechanisms – mechanisms in which one can believe and which definitely merit research, but that are very far from simple or obvious. So I think that dismissing many-worlds from an Occam’s razor viewpoint is wrong, even if later we find that there were hidden variables all along and they end up being the simpler explanation.

        There’s definitely reason and reasonableness in the criticisms of many-worlds (I wanted to stress that when talking about many-worlders themselves). I’ve just found a certain kind of criticism (appeals to perceived common sense) also in many other discussions of physics. I do not find it to be epistemologically sound, and I generally feel the need to complain about it. But all in all I think we agree more than we disagree.

        • smack says:

          Sorry. I clearly owe you an apology for misreading you. If you don’t even hold a position, it seems clear you were not being dismissive in asserting it! I would conclude I am getting too sensitive about this behavior (and I’ll try to be careful I also don’t act that way to actual MWI supporters).

          Yes, I see what you mean. I do think the need for a measure (or something) is a potentially pretty big spoilsport for the “MWI is minimal” picture, but I largely agree that most other (I might just say most, period) models of QM need some additional something, and it seems possibly fruitful to think about what the implications of those things should be.

          I also agree that Occam’s razor arguments against MWI are misguided (although I’m a little pessimistic about Occam’s razor settling *any* really serious scientific disagreement). I don’t really believe in MWI, but that’s for reasons not really related to Occam.

          I’m probably a bit more sympathetic than you to “It’s obviously weird and wrong” arguments, on the other hand. While I grant that the *mathematically* great simplicity it has deserves some weight, I also think that it’s naturally hard for us to think through all the actual implications of something so foreign and strange as an actual MWI, and the fact that it does, in fact, appear not to be able to produce the Born rule (and that this wasn’t noticed widely for some decades) sets alarm bells ringing for me. I guess what I’m trying to say is that I take very seriously the fact that we must not adopt theories that undermine common sense so thoroughly that the very common sense we use to conduct experiments in a lab and come up with the theories is no longer valid! And I think it’s very challenging to think through MWI carefully enough to be confident that it does not do so.

          That’s not the strongest criticism against it, IMO, but I also don’t think it’s an entirely invalid one.

          Incidentally, you wrote in your earlier post against the possibility of a “math only” MWI, i.e., accepting the MWI in math but not in fact. I understand that Wojciech Zurek is recently advocating doing exactly that. Have you read his thoughts on that, and do you have any thoughts in response? (I don’t entirely grok where he’s coming from, myself).

          • niohiki says:

            Uh, I am writing this very late, sorry! A good chunk of discussion got dragged into Cosmic Doom stuff, I couldn’t care too much, and forgot to reply here.

            I just wrote in the new post basically what would be my reply, with regards to Occam’s razor (and, well, indeed, writing that is what made me remember to reply here).

            The TL;DR is that I am a bit hopeful that the next level of fundamental physics will have as effective phenomena both Hilbert-space-QM (well, QFT) and Born’s rule. It would more or less follow the trend in physics, and it seems less weird to me than “suddenly, the Soul observes a measurement!” like Copenhagen requires. But sure, it depends on a) some more fundamental level existing and b) being accessible to us and c) actually reproducing Born’s rule. In any case, it’s for now only an expression of hope.

            About @Zurek, no, I haven’t, but I’ll take a look! Unless there’s something deeper in that position, though, I think that the distinction between models “describing reality only through math” and models “being reality” is metaphysical – or even semantic. I think every physicist agrees the model is not the reality, it just happens to describe it very well. The question -as physicists- is more “which model do we choose” (so we can all speak the same language) and not so much “what reality really is”. Kant, phenomena, noumena, etc.

    • eigenmoon says:

      Awesome, finally I found someone to ask this. I’m worried that some Cosmic Doom such as vacuum decay would kill us all with significant probability. But everybody says: relax, we’re still here after many billion years, so we must be doom-proof, nothing to worry about. However for an MWI believer this is no consolation at all! What if we’re just the survivors? Is there some way to calm down MWI enthusiasts such as myself about the Doom?

      • niohiki says:

        Well, I’m not really an “MWI believer”, so I’m not sure if I’m indeed the one to ask. What I am is, as Scott would put it, a Bayesian inference machine. So I’m definitely not going to worry more about Cosmic Doom than I do about getting run over by a bus when I cross the street listening to music (which I ashamedly admit I sometimes do), or I’d be exposing myself to a Dutch book situation. I mean, where do we get that significant probability estimate from anyway? Do we happen to know the initial state of the universe, or the Ultimate description of Reality? This is exactly what I point out when talking about unjustified things being derived from the many-worlds interpretation. It does not imply anything about probabilities of this or that happening! (@smack remarks that quite well, actually.)

        (Normally the dutch book is used to say something about “put your money where your mouth is” and “revealed preferences”. Instead, I believe some people honestly worry more about Cosmic Doom or the Judeo-masonic conspiracy, than they do about car accidents, but that indeed leaves you open to exploitation by fearmongers who may stand to profit by e.g. selling books, organizing “seminars” and “tutorials”, or winning the elections.)

        • eigenmoon says:

          It does not imply anything about probabilities of this or that happening!
          MWI changes how you apply our existence as evidence. Consider the fact that LHC has failed to detect mini black holes (although they have tried). Without MWI, that’s simply evidence that no black holes were created. But with MWI the same fact might mean something entirely different, namely that nobody has survived in the worlds in which mini black holes were created. Now if I’d say that MWI gives evidence for mini black holes, you would be right to call it unjustified. But I think it’s an overshoot to say that MWI works exactly the same way as non-MWI here.

          Consider this with my comments in square brackets:

          In a study posted on the arXiv in March 2015, it was pointed out that the vacuum decay rate could be vastly increased in the vicinity of black holes, which would serve as a nucleation seed. According to this study a potentially catastrophic vacuum decay could be triggered any time by primordial black holes, should they exist. If particle collisions produce mini black holes then energetic collisions such as the ones produced in the Large Hadron Collider (LHC) could trigger such a vacuum decay event. However the authors say that this is not a reason to expect the universe to collapse [or is it?], because if such mini black holes can be created in collisions, they would also be created in the much more energetic collisions of cosmic radiation particles with planetary surfaces [and maybe that is indeed happening]. And if there are primordial mini black holes they should have triggered the vacuum decay long ago [and maybe they did, we’re just in the surviving quantum worlds]. Rather, they see their calculations as evidence that there must be something else preventing vacuum decay [with MWI that’s no evidence at all].

      • Bugmaster says:

        I don’t believe in MWI (nor do I disbelieve in it, I just kinda don’t care), but if I did, I might say something like this:

        There are infinitely many possible worlds, but chances are good that the vast majority of them do not contain Imminent Cosmic Doom ™. Now, it is entirely possible that you happen to exist in one of those extremely rare worlds that do contain the Doom, but it’s extremely unlikely, so why worry ?

        • eigenmoon says:

          chances are good that the vast majority of them do not contain Imminent Cosmic Doom
          How do we know that?

          • Bugmaster says:

            Because you currently exist in a world where the Cosmic Doom had failed to happen for billions of years, which (along with the fact that we rarely observe Cosmic Dooms happening to distant stars) implies that Cosmic Dooms are rare. Now, granted, there could be something super-special about you; but there’s no reason to assume that. So, if you assume that your current world is not privileged in some way, then most worlds probably do not contain the Dooms.

            If you want to think about it another way, let’s say that I want to play a game with you: you’ll flip a coin that I gave you, and if it comes up Heads, I give you a dollar; if it’s Tails, you give me a dollar. Being genre-savvy, you flip my coin 100 times just to test it, and it comes up Tails 99 times. You say, “nice try but there’s no way I’m playing this game with your trick coin”. But I reply, “that’s just because you got extremely lucky and ended up in the one world where it came up Tails 99 times out of 100. Don’t worry, the probability of the next flip is still 50%”. Do you play the game or not ?
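            For what it’s worth, here is a hedged sketch of the ordinary (non-anthropic) Bayesian reading of that coin example; the 50/50 prior between a fair coin and a 99%-Tails trick coin is my own made-up assumption:

```python
# Posterior probability the coin is fair after seeing 99 Tails in 100 flips,
# under an assumed 50/50 prior between "fair" and "lands Tails 99% of the time".
from math import comb

n, tails = 100, 99

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

prior_fair = prior_trick = 0.5
like_fair = binom_pmf(tails, n, 0.5)
like_trick = binom_pmf(tails, n, 0.99)

posterior_fair = (prior_fair * like_fair) / (prior_fair * like_fair + prior_trick * like_trick)
print(f"P(fair | 99 Tails in 100) ~ {posterior_fair:.1e}")   # roughly 2e-28
```

            Under that reading, “you just got extremely lucky” is not a live option; whether survivorship selection changes the update is the point under dispute.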

          • eigenmoon says:

            @Bugmaster
            Because you currently exist in a world where the Cosmic Doom had failed to happen for billions of years
            That’s no evidence at all because in all the other worlds nobody exists anymore, so of course we would exist in the world where the Doom hasn’t happened yet. That tells us nothing about how rare our world is.

            we rarely observe Cosmic Dooms happening to distant stars
            A proper Cosmic Doom travels with the speed of light. Nobody observes it and lives.

            if you assume that your current world is not privileged in some way
            I’m not talking “hey, what if the probability of the Doom is more than one in a million?”. I’m asking how do you know that the Doom isn’t constantly obliterating us with probability 99.9999999% per second?

          • Bugmaster says:

            @eigenmoon:

            I’m not talking “hey, what if the probability of the Doom is more than one in a million?”. I’m asking how do you know that the Doom isn’t constantly obliterating us with probability 99.9999999% per second?

            It very well might; also, my trick coin could be completely fair in 99.9999999% of all worlds. Do you play the game, or not ?

          • eigenmoon says:

            @Bugmaster
            No, I don’t play the game. But I don’t see your point. The coin should kill me somehow if you want it to model the Doom.

      • Suppose you have two universes, one MWI, the other non-MWI. In both universes, an observer rolls a die and, if the result is anything other than six, paints his house red and stops rolling the die. If he rolls a six, he paints it blue and then rolls again the next day. One hundred days later an observer is in a blue house. He concludes that the probability of an observer being in a blue house in the MWI universe is 1, while the probability of an observer being in a blue house in the non-MWI universe is close to zero. Thus, anthropic reasoning implies he is in the MWI universe. But then suppose he is in a red house. He concludes that while the probability that an observer is in a red house in the non-MWI universe is close to one, the number of observers observing themselves in a red house in the MWI universe is 6^100-1. Thus, anthropic reasoning also implies he is in the MWI universe.

        Now suppose in a variation on this, God flips a coin, if it’s heads, he creates the MWI universe, if it’s tails, he creates the non-MWI universe. He then asks the inhabitant(s) of the house to guess the outcome of his initial coin flip on day 100. He gives them a deal: if you guess correctly that I flipped heads, I will give you 1$. If you guess correctly that I flipped tails, I will give you 10$. What would you choose? Does it matter what color your house is?

        Now suppose another variant: this time God doesn’t like the red house, and sends a tornado to destroy it and its inhabitant every time it is painted red. He then tells the inhabitant, if he exists, that he can stop rolling the die, that he would have been killed if he painted his house red, and then offers him the same deal. Should the choice differ?
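
        A quick sketch of the branch-counting arithmetic in this thought experiment (purely illustrative; branches are counted naively, one per die outcome, ignoring MWI branch weights):

```python
# Red/blue-house scenario: roll a die each day; anything but a six paints the
# house red and stops the rolling; a six paints it blue and the rolling continues.
DAYS = 100

# Single-world (non-MWI) probability of still being in a blue house after 100 days
p_blue_single_world = (1 / 6) ** DAYS          # ~1.5e-78, "close to zero"

# Naive MWI branch count: each day the surviving blue branch splits six ways,
# five of those stop in a red house, one stays blue and rolls again.
red_branches = sum(5 * 6 ** day for day in range(DAYS))
blue_branches = 1

print(p_blue_single_world)
print(red_branches == 6 ** DAYS - 1)   # True: matches the 6^100-1 count above
```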

        • eigenmoon says:

          This is very much not how MWI works. If an observer in the MWI universe throws a quantum die, he splits into 6 observers each of which has 1/6 moral worth of the original and 1/6 weight of the original for the purposes of the anthropic principle. If you don’t believe that, we can probably find some betting arrangement that would increase both our utility functions.

          • “This is very much not how MWI works. If an observer in the MWI universe throws a quantum die, he splits into 6 observers each of which has 1/6 moral worth of the original and 1/6 weight of the original for the purposes of the anthropic principle. ”

            That makes sense mathematically, but it doesn’t make sense if you are face to face with an observer who would have 1/6 of the weight of his “parent.”

            “If you don’t believe that, we can probably find some betting arrangement that would increase both our utility functions.”

            We could design a gambling game where we both come out ahead as we are both constantly increasing in number, but if it’s a zero sum game with no increasing/decreasing marginal utility then we can’t both gain over the baseline scenario of not having played the game.

          • eigenmoon says:

            it doesn’t make sense if you are face to face with an observer who would have 1/6 of the weight of his “parent.”
            You’re never face to face with somebody who has 1/6 of your weight. If you’re face to face with him, you’ve already entangled with him, so your weight is also 1/6 of what it was before he cast the die. But that’s kinda OK, because the other five yous sum up with present you into your full weight as it was before casting the die.

            we can’t both gain over the baseline scenario of not having played the game
            But suppose that we have a quantum coin that gives tails 10% of the time. I value the 10% world at 10% of the original and 90% world at 90% of the original. If you value the 10% world at 100% of the original and 90% world also at 100% of the original, that means your betting odds are 1:1 and mine are 9:1. Do you see now how we can both make our average self richer, provided that you count averages your way and I count averages my way?
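
            A concrete version of that bet, with made-up stakes chosen between the two implied betting odds (9:1 vs. 1:1); each side comes out ahead by its own accounting:

```python
# Quantum coin: Tails 10% of the time. One bettor weights branches by their
# probability; a hypothetical "branch counter" weights every branch at 100%.
# The stakes below are illustrative, not taken from the comment.
P_TAILS, P_HEADS = 0.1, 0.9

pay_to_weighter_if_heads = 1.0   # branch counter pays $1 in the Heads branch
pay_to_counter_if_tails = 2.0    # probability-weighter pays $2 in the Tails branch

# probability-weighted accounting
weighter_value = P_HEADS * pay_to_weighter_if_heads - P_TAILS * pay_to_counter_if_tails  # +0.7

# every-branch-counts-fully accounting
counter_value = 1.0 * pay_to_counter_if_tails - 1.0 * pay_to_weighter_if_heads           # +1.0

print(weighter_value > 0 and counter_value > 0)  # True: both sides "profit" by their own measure
```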

        • Now suppose yet another variant: this time, instead of “God flipping a coin,” you assume one interpretation of quantum mechanics is objectively true. Suppose for a moment you survive with a red house in this scenario. On day 100, you are told that the best physicists think there is a fifty-fifty chance of either interpretation of quantum mechanics being true. But they are about to run an experiment which will answer the question with perfect certainty. You are offered $1 if you correctly guess that MWI is true, and $10 if you correctly guess that non-MWI is true. What do you guess? Does it matter what color your house is?

          Now put back the red paint killing you, with the reaction of red paint on the walls of your house triggering a collapse of the universe’s false vacuum state. You are in a blue house, are aware of the false vacuum, and know that you nearly destroyed the universe on each of the previous 100 days. You are offered $1 if you correctly guess that MWI is true, and $10 if you correctly guess that non-MWI is true. What do you guess?

          • eigenmoon says:

            I believe that neither situation offers any evidence whether MWI is true or not, so I would act according to my priors, which are close enough to 10:1 in favor of MWI that I have difficulty choosing.

            For calculating probabilities, both “Whoops, my weight just became 1/6 of what it was!” and “Oh yeah, I’ve just survived Russian Roulette with 5 cartridges in the revolver!” are kinda the same thing.

      • kaathewise says:

        Of course there is. MWI doesn’t make any predictions about the universe different from any other interpretation, so you can’t condition anything on it being “correct”.

        What it does, though, is give rise to a paradoxical line of thought like the one you bring up above (LHC black holes).

        There is no amount of knowledge in the world that would guarantee you against an unknown Cosmic Doom, MWI or not. And yes, as you correctly state, the fact that you have not observed something before cannot guarantee that you won’t observe it in the future, so it’s futile to make predictions simply based on that!

        Since it is impossible to construct a probability space that would account for unknown Cosmic Dooms, that’s not the way to approach the problem. Instead, we have to be satisfied with a probability space that would include just the Dooms that we can model, because we have zero information outside of it by definition.

        • dionisos says:

          Of course there is. MWI doesn’t make any predictions about the universe different from any other interpretation, so you can’t condition anything on it being “correct”.

          I think this isn’t right.
          MWI will explain the same past data (which is the only data we could have), but it predicts something very different for the future: there will be a lot of “branches” in the future, and you will be either in “all” of them at the same time, or in neither (depending on how you define the “self”).

          • kaathewise says:

            No, MWI doesn’t predict that your sense of “self” would be any different from what other interpretations predict. I don’t see any other meaning of “self” (differing from what I am experiencing as “self”) that could be important.

          • dionisos says:

            @kaathewise

            Not my sense of “self”, my “self” itself.

            You have a qubit in the |0>+|1> state, you observe it, and you end up with |observer gets 0 on the qubit>|0> + |observer gets 1 on the qubit>|1>.

            Which observer are you after the experiment? You are either both at the same time (which I don’t think means anything) or neither.

            Because you can’t say you will be either one or the other without adding something very strange to the theory (and ending up with something like Copenhagen, but worse).

          • kaathewise says:

            @dionisos

            I don’t know what “self” is apart from an experience that we are accustomed to, i.e. with particular thoughts, feelings and memories.

            Although the state of the universe in MWI will represent a superposition of two observers with different histories, there won’t be any observer that actually experiences this superposition.

            Instead, MWI, like any other interpretation, predicts that the spectrum of experiences of the observers will still include only concrete histories of “self”; although they will be in superposition, this will be completely undetectable to these observers, and so shouldn’t affect their sense of “self”.

            What does “self” mean if not a particular experience, and why should we care about it then?

          • dionisos says:

            @kaathewise

            Although the state of the universe in MWI will represent a superposition of two observers with different histories, there won’t be any observer that actually experiences this superposition.

            I agree with this, and it is important in the point I am trying to make.

            Instead, MWI, like any other interpretation, predicts that the spectrum of experiences of the observers will still include only concrete histories of “self”; although they will be in superposition, this will be completely undetectable to these observers, and so shouldn’t affect their sense of “self”.

            I believe you are switching between two incompatible views (edit: I think I misread you here, but I’ll leave my response because I believe it helps to explain what I mean):
            Either you say you end up with one observer in two independent states, and you can’t have a global “sense of self” for this observer given this independence: you have an observer with two completely independent “senses of self” (but then, in what sense is this one observer?).

            Or before the measurement you have one observer, and after the measurement you have two independent observers (which is what I believe).
            And the observer before the experiment can’t be one or the other of these future observers, nor (if you accept that an observer should have only one sense of self) both.
            The only remaining possibility is that the past observer isn’t either of these two future observers.
            I.e., MWI is only compatible with empty individualism.

          • kaathewise says:

            @dionisos I don’t disagree with you, but I can’t see what you are suggesting MWI predicts differently, as you were saying above, that wouldn’t simply be a question of terminology.

          • dionisos says:

            @kaathewise

            Because I think people believe it predicts the same thing because they reason: “What does Copenhagen predict I will observe next? What does MWI predict I will observe next? It should be the same thing.”

            But the question doesn’t have meaning in the MWI interpretation: there isn’t any “you” who will “go” into one “branch” randomly and observe something particular.
            MWI truly doesn’t predict the same thing, because it predicts all “branches” will be real.

            MWI only predicts that most observers should observe past data compatible with the Copenhagen interpretation.
            But this is a prediction about the observers, not a prediction about what will happen next, even if it happens to make the observers unable to differentiate between the two interpretations with data alone.

          • kaathewise says:

            @dionisos

            But the question doesn’t have meaning in the MWI interpretation: there isn’t any “you” who will “go” into one “branch” randomly and observe something particular.

            This doesn’t read as “MWI predicts something differently”, it reads as “Non-MWI answers question A with X, but MWI says it doesn’t care/understand question A”.

            MWI truly doesn’t predict the same thing, because it predicts all “branches” will be real.

            This doesn’t read as “MWI predicts something differently”, it reads as “MWI answers question B with Y, but in non-MWI B doesn’t make sense at all”, i.e., “branches” and “real” certainly don’t mean the same thing outside MWI.

            To be able to say that MWI and Copenhagen predict something differently, you would need to state a question that makes sense both in MWI and Copenhagen, to which they give different answers. Those answers should also be falsifiable for them to be predictions.

            I don’t think such a question exists.

          • dionisos says:

            To be able to say that MWI and Copenhagen predict something differently, you would need to state a question that makes sense both in MWI and Copenhagen, to which they give different answers

            You measure a qubit in the state |0>+|1>.
            MWI says the qubit stays in the state |0>+|1> (and the global state of the system, qubit/measuring device/observer, is |0>|measuring 0>|seeing 0> + |1>…).
            Copenhagen says the qubit is now in state |0> or in state |1> (just like the global state of the system).

            So the predictions are different.

            Those answers should also be falsifiable for them to be predictions.

            I think that is false.
            There is nothing that would falsify the hypothesis that the universe is just the present state, and that the past and future don’t exist (i.e., that there is no time).
            There is nothing that would falsify the hypothesis that you are just a Boltzmann brain.
            Etc.
            It doesn’t mean these hypotheses predict the same things; in fact, under the first hypothesis “prediction” doesn’t even have meaning.

          • dionisos says:

            In fact I think the question comes down to:
            – What does it mean to make a prediction?
            – What does it mean to be falsifiable?

            A prediction is what a hypothesis says will happen next.
            Being falsifiable means that the hypothesis predicts a particular subset of possible observers (if some observers happen to be in a state incompatible with what the hypothesis predicts, then the hypothesis is falsified for them).

            Some hypotheses make widely different predictions (if you are a Boltzmann brain, you will die within a second). But they aren’t falsifiable, because no observer could find himself in a state incompatible with their predictions.

            (MWI is falsifiable because quantum mechanics is falsifiable. We are just speaking about the part where we want a falsifiable way to differentiate it from other interpretations.)

          • kaathewise says:

            You measure a qubit in the state |0>+|1>.
            MWI says the qubit stays in the state |0>+|1> (and the global state of the system, qubit/measuring device/observer, is |0>|measuring 0>|seeing 0> + |1>…).
            Copenhagen says the qubit is now in state |0> or in state |1> (just like the global state of the system).

            So the predictions are different.

            Right, but the quantum state is not visible to the observers, it’s just part of the model, not a prediction.

            Being a Boltzmann brain is not falsifiable, I agree; we can only deal with things like that from the practicality point of view, not via pure reason.

            MWI is, indeed, falsifiable, like any other interpretation, as long as we use it to predict the future experience of observers; but since they all predict the same future experience, you can’t falsifiably distinguish between them.

        • eigenmoon says:

          I basically agree with that.

          We can model all Cosmic Dooms with a parameter: the probability of surviving all Cosmic Dooms for a year. Then we can incorporate this parameter into Bayesian reasoning in the usual way, although we need to get priors somehow. That’s the tricky part.
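
          A minimal sketch of that “usual way” update, with a made-up prior over the yearly survival probability (and ignoring the anthropic selection effect debated above):

```python
# Toy Bayesian update on "we have survived ~13.8 billion years", treating all
# Cosmic Dooms as a single yearly survival probability p. The prior is invented.
years_survived = 13_800_000_000

hypotheses = {0.9: 0.1, 0.999999: 0.1, 1.0 - 1e-15: 0.8}   # p -> prior weight

def likelihood(p):
    """Chance of surviving `years_survived` years if the yearly survival probability is p."""
    return p ** years_survived

unnorm = {p: prior * likelihood(p) for p, prior in hypotheses.items()}
total = sum(unnorm.values())
posterior = {p: w / total for p, w in unnorm.items()}
print(posterior)   # essentially all mass ends up on the most doom-resistant hypothesis
```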

          I’m happy enough to assign negligible (but non-zero) weight to all unknown Dooms, at least for the sake of sanity. But this vacuum decay thing is worrisome, and although you’ve said that of course there is some consolation to MWI believers, I haven’t really seen one.

          • kaathewise says:

            But this vacuum decay thing is worrisome, and although you’ve said that of course there is some consolation to MWI believers, I haven’t really seen one.

            My only consolation is that Cosmic Dooms are exactly as scary outside MWI as they are per MWI. Non-MWI goes even better with your proposition, since you are using the word “probability”, which goes perfectly well with other interpretations.

            And, if you think about it, this problem is not really limited to quantum mechanics either.

            Should I worry that my cup of tea will turn into a giant snake tomorrow? I don’t know, my computing power is not enough to efficiently deal with all probable events that I can conceive and model already.

          • eigenmoon says:

            My only consolation is that Cosmic Dooms are exactly as scary outside MWI as they are per MWI.
            But I started by saying why I don’t believe that. In non-MWI, the fact that our world survived for 13+ billion years is evidence that it’s sufficiently doom-resistant. But in MWI that tells us nothing. How then can Cosmic Doom be equally scary in MWI and non-MWI?

          • kaathewise says:

            @eigenmoon

            In non-MWI, the fact that our world survived for 13+ billion years is evidence that it’s sufficiently doom-resistant.

            I find this statement misleading. You can’t have evidence without a concrete model involving a particular probability space. If you do have a model, it should be translatable into MWI.

            If you don’t have a model, however, there is an infinite number of things you can’t guarantee yourself against. Such as “why doesn’t the world simply cease to exist tomorrow?”

            I don’t feel like simply saying “evidence” is sufficient to discard this question (even non-MWI), because whichever model you have, there exists a continuum of imaginary ways for the world to end outside this model. So my reasoning is simply based on practicality, not “evidence”, which should translate into MWI.

            Maybe I am missing something, and you could explain in more detail what exactly you mean by the “evidence method” that works in non-MWI but doesn’t translate into MWI?

          • eigenmoon says:

            @kaathewise

            You are right and I was wrong.

            I’ve just written down the model in detail and the Doom disappeared. Yes, it’s technically possible that the LHC produced black holes and everybody died in those worlds, but completely Doom-free possible Universes overwhelmingly outweigh those horrors.

            Thank you for your patience.

          • kaathewise says:

            @eigenmoon

            Brilliant, I am happy to hear that our discussion was fruitful!

      • dionisos says:

        However for an MWI believer this is no consolation at all! What if we’re just the survivors?

        What does it mean to be “you” in the MWI interpretation, exactly?

        Either there is no real “you” persisting in time (which is what I currently believe), and you don’t really have to worry particularly about “your branch” being doomed in the future (because that makes no sense).

        Or you are all the future “you” at the same time, and a big number of “your branches” will not be doomed in the future.

        I really think we have to abandon our naive understanding of the “self” in the MWI.

        • eigenmoon says:

          “You” at a particular time moment is the state of your software, the macroscopic state of your brain. “You” are existing at the same time in all worlds – physical worlds, computer simulations or whatever – which supply hardware in that exact state. “You” might branch by observing one thing in some worlds and another thing in other worlds – note that whether the worlds themselves split in the MWI sense is a different question.

          Whether you identify yourselves between different time moments is a matter of convenience. After all, each night our neurons rewire a little, so one could say that he’s a different man every day. I find it most convenient to say that “you” persist into all the future versions of “you” taken together.

          a big number of “your branches” will not be doomed in the future.
          The absolute number of branches is insanely huge, but what we really care about is the weight, or probability, of our branch. By probability I mean that if entry into a particular branch happens with 10% probability, then this branch has 10% of the weight of its parent, and the moral value of everyone’s life in this branch is 10% of what it is in its parent.

          All this is a bit trippy – for example, anybody from the past has enormous moral weight compared to us – but I don’t think it amounts to something as drastic as abandoning our naive understanding of the “self”.

          • dionisos says:

            I find it most convenient to say that “you” persist into all the future versions of “you” taken together. […]
            The absolute number of branches is insanely huge, but what we really care about is the weight, or probability, of our branch.

            If “you” persist into all the future versions of “you” taken together, then what matters isn’t the probability of what will happen to “your branch”, but what will actually happen to all “your branches”.
            If you are all the future versions of “you”, you are virtually immortal even if the vast majority of all branches are doomed.

          • eigenmoon says:

            @dionisos
            isn’t the probability of what will happen to “your branch”, but what will actually happen to all “your branches”.

            I don’t see the difference. The sum total of the weights of all the branches born from this moment, including doomed ones, is equal to the weight of this moment.

            you are virtually immortal even if the vast majority of all branches are doomed
            This would be very useful if not for the Cosmic Doom. For example, I could get a lot of quantum random bits and interpret them as ASCII text, and hope that they contain instructions for achieving eternal life. The probability that I would indeed receive such instructions is only 2^-{huge number}, but the utility of this quantum branch is infinite, so all in all it’s infinitely worth doing that. But actually this idea fails completely due to non-zero Cosmic Doom.

          • dionisos says:

            I don’t see the difference. The sum total of the weights of all the branches born from this moment, including doomed ones, is equal to the weight of this moment.

            Sorry I misunderstood you about the weight/probability idea.

            In my point of view, in this context we should call it a weight and not a probability, because a probability is only in the head and expresses our lack of knowledge (whereas here we know all branches will actually exist, with different weights).

            But given that it isn’t a probability, we aren’t really calculating an expected value, and so I think weighting the value by the weight of the branches still needs to be justified (but it is an interesting idea).
            Also, given that it isn’t a probability, we can consider the sum of the weights to be more than one.

            for example, anybody from the past has enormous moral weight compared to us

            Yes, and this seems very strange to me.
            I think it makes more sense to consider that the value I have now is equal to the value I had a second ago, implying that the sum of the values of the future branches grows exponentially.

          • eigenmoon says:

            @dionisos
            sum of the weights to be more than one
            Since there’s no way to measure weight, there’s no absolute “one”, only the ratios between the weight of a branch and the weight of its subbranch, and those are basically probabilities, but if you don’t want to talk about probabilities I won’t.

            I think it makes more sense to consider that the value I have now is equal to the value I had a second ago,
            I guess it’s a good idea to value almost-“you”s almost as much as “you”. If you do that, then the effect that “the sum of the values of the future branches grows exponentially” disappears.

            There’s an online quantum randomness generator where everyone can rip our quantum branch into shreds by looking at a bunch of random quantum bits. I’ve done it once. Has my value dropped 2^1024 times because of that? (I think they give you 1024 bits at a time). Or has it inflated 2^1024 times?

            I like your principle: it makes more sense to consider that the value I have now is equal to the value I had a second ago, and I think it should apply even to the second during which I’ve got the 1024 quantum random bits, but “my” value at the end of this second has to take all my “siblings” into account. Maybe that’s what you meant by abandoning our naive understanding of the “self”.

            I hope I forgot the random bits so I don’t branch you if you don’t want to branch. But note that everybody constantly interacts with random quantum bits because everybody’s genetic mutations are truly random, and everybody might behave differently when their genes are different, so “we” branch a lot by interacting with others. (Again, that’s not the same as MWI-branching, which happens a lot all the time).

      • ChrisA says:

        @eigenmoon – no need to worry. Let’s say there is a 99% chance the universe is being destroyed every second by some kind of vacuum decay. The 1% branch would still contain trillions upon trillions of you.

        To my mind we will all know eventually whether the MW hypothesis is correct, when we have survived millions of improbable coincidences that should have killed us and are still alive. This is the quantum immortality argument – there is always a branch where this current individual survives.

        Another point on the “MW is too extravagant” argument – it is pretty clear that our single universe is infinite. This means the MW hypothesis doesn’t actually change the number of entities extant. Infinity times a large number is still infinity.

        The final argument, from David Deutsch of course, is that of quantum computing (which he invented, if I am not wrong). If MW is not correct, where is the computing happening? In fact this is the killer thing for me: the leading theorist on MW came up with the idea of quantum computing as a result of his intuitions on MW. How about that for a practical demonstration of theory and practice? In comparison, I don’t see any technology coming from our understanding of how the Copenhagen theory works.

        • eigenmoon says:

          I have been convinced not to worry about the Doom. However, quantum immortality is not it. Dying 99% every second is very much not the same as being immortal. In other words, quantum branches do not get a bonus utility for me existing in them that is not proportional to their weight.

          it is pretty clear that our single universe is infinite.
          That’s news to me.

          If MW is not correct where is the computing happening?
          There’s no difference between MWI and non-MWI until you poke a quantum object with something huge, and quantum computers do everything to avoid doing exactly that.

          But it would be interesting if a quantum computer could emulate a human brain (or an AI) without collapsing the wavefunction. How would its subjective probabilities depend on its wavefunction? I guess that they would obey the Born rule, in which case it would become clear to everybody that wavefunctions don’t collapse.

          • ChrisA says:

            These arguments all come from The Fabric of Reality by David Deutsch; he argues (as do very many) that we have measured the universe as flat from the cosmic radiation data from the big bang. From what I have read there is still a small chance it is not flat due to measurement error, so perhaps you are right to be skeptical. But even if it were not the case, why would there be only one big bang? That seems a bit pre-Copernican. If the big bang can happen once, why not an infinity of times? I think our default should therefore be an infinity of universes; the alternative is a bit too weird for me – we are somehow in the middle of the only space-time bubble to have ever existed? So infinity already exists for me, and MWI doesn’t change that whether it is true or not.

            On quantum computing, of course the Copenhagen interpretation allows quantum computing; it is defined to allow it. I was trying to make two different points, though. The first is David Deutsch’s point about where the computing is happening in the Copenhagen interpretation: when (if?) we build several-hundred-qubit computers, the equivalent calculations on a classical computer would exceed the classical calculation capacity of the known universe. Somehow this computing capacity disappears once it is observed, though. The second point is that Deutsch invented quantum computers to demonstrate to people that the MWI was correct. So we have someone who deeply understands quantum theory coming up with a practical application of this theory. I think this is a real demonstration of the validity of this thinking. It is sort of like someone saying they have a theory of aviation and then showing it by building a successful plane. Yes, there are other potential aviation theories, like invisible angels holding the plane up, but they perhaps shouldn’t be described as the default anymore.

            On your last point, the fact that under the Copenhagen theory we have to debate whether an AI brain observing an experiment could affect the result or not really shows the weakness of that theory, IMHO.

    • Robert Jones says:

      listen, the experimentally-validated equations say that a bunch of different states evolve and separate from an initial state, so let’s just assume that we’re just riding one of those and the rest just go their own ways for as long as there’s no evidence to the contrary

      The experimentally-validated equations don’t say anything about states separating from an initial state. That really is the problem: why do we not observe the superposition?

      Also to be “just riding one of those” we need to have chosen a basis, but where has that come from?

      • eigenmoon says:

        This is really for the PhD to answer and not plain ol’ me, but lemme give it a try anyway:

        why do we not observe the superposition?
        In order to observe a qubit in a superposition of 0 and 1, we entangle with it, becoming a superposition of ourselves who observed 0 and ourselves who observed 1.

        we need to have chosen a basis, but where has that come from?
        We pick a basis when we decide what we want to observe. It’s not enough just to entangle with the qubit, we need to tell it that we wish to see either 0 or 1.

        Let (x, y) describe the superposition of the qubit, with (1, 0) being pure zero bit and (0, 1) being pure one bit. Let’s ignore that x and y can be complex. The qubit’s state (assuming we’ve normalized it) can be anywhere on the circle x²+y²=1. Here’s when we have to pick a basis. We could pick (1,1)/sqrt(2) and (1,-1)/sqrt(2) if we really wanted to, but let’s pick (1, 0) and (0, 1) like everybody else.

        Now comes the weird part and nobody knows exactly how it works, but we entangle with the qubit and transform it and ourselves in such a way that what remains is only a superposition of the qubit in the (1, 0) state together with us having observed a zero bit, and the qubit in the (0, 1) state together with us having observed a one bit. The psi-function of us together with the qubit is now separated into two parts that evolve independently.
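
        A small numerical illustration of the basis-picking step (purely a sketch; complex amplitudes are ignored here, as above, and the example state is arbitrary):

```python
# Squared overlaps with the chosen basis vectors give the weight of each
# post-measurement branch; a different basis gives different branches.
import numpy as np

qubit = np.array([0.6, 0.8])    # some state (x, y) with x**2 + y**2 = 1

standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rotated = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def branch_weights(state, basis):
    """Weight of each branch = squared overlap of the state with each basis vector."""
    return [float(np.dot(state, b)) ** 2 for b in basis]

print(branch_weights(qubit, standard))  # roughly [0.36, 0.64]
print(branch_weights(qubit, rotated))   # roughly [0.98, 0.02]
```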

      • lightvector says:

        The experimentally-validated equations don’t say anything about states separating from an initial state. That really is the problem: why do we not observe the superposition?

        That’s actually something that I think even non-MWIers who understand quantum mechanics would agree MWI easily handles (it’s not the point of disagreement).

        In order for interference to happen between two “branches” in a superposition as viewed from a given basis, so that there are any observable quantum effects at all between those two branches, the two branches need to evolve in such a way that there is some basis state that *both* branches contribute amplitude to. If the states have ‘separated’ enough via interactions with other particles, then the degree to which they do this will be vanishingly small, because *all* of the particles would need to match up.

        A crude/simplified way of thinking of it: in order to interfere and see quantum effects, both branches would have to evolve (have some amplitude for evolving) in such a way that the ENTIRE universe and the position/state of EVERY particle in it exactly matches up between the two branches!

        This is pretty easy for small and carefully physically isolated systems that only involve a few particles. If none of the rest of the universe ever ‘sees’ which state is which, only those few particles need to ‘re-cohere’ into matching states to get interference.

        It’s obviously impossible once things have leaked out beyond that. For example, you yourself look at the state with a measuring device and see either a 0 or a 1, or you have a computer record the 0 or 1 – now millions of neurons with trillions of atoms in your brain are all firing and moving differently depending on which ‘branch’ you’re in, or thousands of circuits in the RAM of your computer are firing differently depending on which ‘branch’, the heat given off by those circuits is different, etc. Now the ‘likelihood’ (i.e. amplitude) of those two branches evolving in the future so that every particle in the universe exactly matches up again is basically 0. So they are basically independent now.

        Does that make sense?

        MWI says that both branches are “real” since this is the natural implication of the known physics, and people have been putting ever larger and larger collections of particles into superpositions and observing meaningful quantum effects as techniques for isolating those particles from the rest of the environment become better, with no known theoretical reason why there should be any limit to how much two branches can “diverge” and still both be shown to “exist” (due to actually seeing interference). The fact that you only see a 0 or a 1 is just indexical uncertainty as to which “you” you end up being, as there are effectively now two independent branches, each with a “you” in them (except for the vanishingly small likelihood that the two branches do interfere again in the future due to you and every particle in the universe somehow coming back into perfect convergence).

        Copenhagen and collapse interpretations variously balk at the idea of having multiple branches if in practice one could never ever again observe interference between the two. Or they point out the problems with how exactly one can rigorously formulate the indexical uncertainty and why it should be weighted in the particular way that it is (this is what people are discussing when they talk about the “Born rule”). So they postulate that there is some “collapse” that occurs at some scale between the tiny collections of particles people can test and human-sized systems (or perhaps it happens only purely metaphysically rather than at any specific size, depending on your philosophy), such that the branch that “you” probabilistically observe yourself in is the only “real” one and the other ones are pruned away or cease to exist.

        Hope that helps!
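
        A toy way to see the “everything must match up” point above (the numbers are invented for illustration):

```python
# The interference term between two branches is suppressed by the overlap of
# every environment particle they have become entangled with; a crude model is
# a product of per-particle overlaps, which dies off exponentially.
per_particle_overlap = 0.9   # assumed similarity of one particle's state across the branches

for n_particles in (1, 10, 100, 1000):
    visibility = per_particle_overlap ** n_particles
    print(n_particles, visibility)
# 1 -> 0.9, 10 -> ~0.35, 100 -> ~2.7e-05, 1000 -> ~1.7e-46: once a measurement has
# leaked into brains, computers, and heat, the branches are effectively independent
```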

    • sovietKaleEatYou says:

      Thank you for this. It is frustrating that the question of QM interpretation is difficult to discuss without basic knowledge of the subject, and thank you for trying to provide a little more of the context: in particular that the Schroedinger equation is meaningless without many worlds.

      It is strange to me (as an outsider to this particular field) that serious physicists go on anti-many-worlds crusades. It seems to me similar to saying “OK, I believe in special relativity, but general relativity? Come on! The universe can’t be that complicated”. Of course such people would be OK doing actual calculations: you can generally patch pieces of special relativity together and treat gravity as a field to get very close to the same results as GR. But few serious physicists say that GR is bunk.

      I’ve always wondered: are people against many worlds just misguided, or is there something else going on?

  42. Alex Zavoluk says:

    Is the person who made this discovery doing Science? And should we consider their theory a useful contribution to physics?

    I think the answer is clearly yes. But consider what this commits us to. Suppose the scientist came up with their Extradimensional Sphere hypothesis after learning the masses of the relevant particles, and so it has not predicted anything. Suppose the extradimensional sphere is outside normal space, curled up into some dimension we can’t possibly access or test without a particle accelerator the size of the moon. Suppose there are no undiscovered particles in this set that can be tested to see if they also reflect sphere-related parameters. This theory is exactly the kind of postempirical, metaphysical construct that the Aeon article savages.

    But it’s really compelling. We have a hundred different particles, and this theory retrodicts the properties of each of them perfectly. And it’s so simple – just say the word “sphere” and the rest falls out naturally! You would have to be crazy not to think it was at least pretty plausible, or that the scientist who developed it had done some good work.

    I would say that this person is certainly doing math. I suspect that in practice, they would also be doing science, since it would be very unlikely that this insight does not allow for any further discoveries. It could be the case that the consequences of all the particles being related to spheres are already known, and just no one realized the connection to spheres. In that case, this person is only realizing the connections, which is “just” math.

    But in practice, I would be surprised if that were the case. Mathematical oddities have a habit of turning out to have meaningful physical consequences. See also: virtual particles and Noether’s theorem.

  43. Alex M says:

    Bad scientists always want to make excuses for why their hypotheses can only retrofit data instead of forecasting it. The solution is not to accommodate their excuses or debate reasonably with them: the solution is to understand that all of their excuses are nothing more than self-interested rationalizations for their own incompetence, and to punish them for it until they are no longer incentivized to make arguments attempting to justify incompetence.

    The fact that these scientists may legitimately believe their own rationalizations is irrelevant. Worthless people are very good at deluding themselves into believing that they have value. This not only benefits them (because their self-delusion allows them to better convince others of their own usefulness), but it allows them to face themselves in the mirror each morning without acknowledging to themselves that they are parasites who drain society of resources without providing any practical benefit in exchange. We see the same principle at play when we observe billionaires who argue for lower taxes on the wealthy because taxing them “discourages economic growth.” This argument is bullshit, and if they were completely honest with themselves, they would know it is bullshit. But the human mind is not designed for honesty; it is designed for self-delusion. Mistake theory is a way for useless parasites (whether economic, political, or scientific) to avoid being called out for what they are, because it asserts we need to be polite and assume that the other side is arguing in good faith, even though it is empirically provable that this is not the case. The truth is that the human mind subconsciously operates along self-interested conflict theory, and mistake theory is just a tissue-thin rationalization that the conscious mind projects on top of its subconscious motivations. This layer of rationalization allows bad people to pretend that they are good people and thus avoid the punishment they deserve at the hands of the society they exploit.

    I see this happen all the time in all sorts of fields – in science, in politics, in environmentalism, in the business world – and I am tired of it. If you really want people to be completely honest with themselves (which is the only way you can expect them to be honest with others) then you need to incentivize them to be honest by punishing them whenever they engage in willful self-delusion. Once it is no longer profitable to live in a state of self-delusion, people will rapidly stop perpetuating this mindset, which will lead to great advances in science and technology. We won’t need to worry about major problems like the replication crisis when scientists are forced to be totally honest with both themselves and others. Right now people (including Scott) lie to themselves because that is the static equilibrium point of our incentive structure – it is advantageous for them to tell themselves these “little white lies” that actually have big negative outcomes for society. Once we make self-delusion and rationalization disadvantageous by providing very sharp disincentives to discourage it, people will very rapidly stop demonstrating this behavior since it is no longer a static equilibrium point. The new static equilibrium point of the system will have shifted to Radical Honesty.

    I am disappointed in this blog post by Scott. He claimed that he is trying to steelman the opposing side’s argument before deconstructing it, but in fact he does exactly the opposite, presenting a weaker case than the reality he argues against. Here is the harsh truth: a lot of modern-day scientists are massively incompetent. Recognizing this reality would involve a significant paradigm shift, and that shift would involve more negative consequences for people who do shitty science. Obviously this is not in the shitty scientists’ self-interest, and since shitty scientists currently outnumber competent scientists, they are fighting this movement and rationalizing excuses for why the current methodology is sound. It’s as simple as that. Is that explanation elegant enough for you, Scott?

    • Plenty of scientists are incompetent and many are even frauds, and there need to be stronger incentives to avoid things like p-hacking.

      I don’t see the above as contradicting Scott’s post.

  44. ajakaja says:

    Your physics example is accurate, in the sense that it is exactly the kind of reasoning used in physics, so it captures the field well.

    My resolution to this (as someone specifically interested in the interpretation of physics) is to reject the Popperian model of science entirely. More specifically, I think that middle school science fairs have done us all a huge disservice by teaching us that the steps of the ‘scientific method’ are “hypothesis, experiment, analysis, conclusion” of roughly equal importance. In my anti-Popperian view, this is totally wrong. I think science is, in equal parts:

    1. Ideation
    2. Experimentation

    First you build models, by collecting data, doing math, etc. Then you do experiments to verify them.

    If I could re-design middle-school science fairs, I think the left and middle of the triptych should be data collection followed by hypothesis generation, and the right column can be an experiment that you do afterwards.

    String theory, guessing that the particle masses are N-spheres, etc — these are just all parts of step (1).

  45. vashu1 says:

    One problem with estimating the complexity of theories (Khufu vs. Atlantis) is that we do not actually perform a billion experiments and construct millions of models. We just IMAGINE that we do this and get a result based on our intuition. No wonder that people with different intuitions give very different estimates.

    Take the Many-Worlds interpretation. It is very popular among the masses, because to them it seems simple – just imagine lots of worlds and their interactions and you get quantum mechanics without faster-than-light paradoxes and stuff. But when you get to know a little more (stuff like weak measurement, etc.), you start to understand that this theory would not be as simple when put into formulas, and maybe that is the reason that Many-Worlds is less popular among professionals.

    Likewise, we consider Atlantis very unlikely. But most of us have quite limited experience that restricts our intuition. Most of us live less than a hundred years and we are not very powerful. What if we were a person who lived several centuries and had the power of a king for most of them? A person who actually saw how history is written and the actual mechanisms of the deep state. Would he set the Atlantis value as low as we do?

    Sorry about this conspiracy stuff, but we did put a non-zero value on Atlantis, didn’t we 🙂

    • The original Mr. X says:

      Take the Many-Worlds interpretation. It is very popular among the masses, because to them it seems simple – just imagine lots of worlds and their interactions and you get quantum mechanics without faster-than-light paradoxes and stuff. But when you get to know a little more (stuff like weak measurement, etc.), you start to understand that this theory would not be as simple when put into formulas, and maybe that is the reason that Many-Worlds is less popular among professionals.

      Is many-worlds actually popular among the masses? I’ll grant it’s popular amongst fiction writers, but I suspect that’s more because of its greater story-writing potential than because of any seeming simplicity.

    • Alex M says:

      I always thought of Atlantis as a metaphor. I mean, ancient people understood the world through a very different lens than we do, don’t you think? What a modern scientist would refer to as “a chemical attraction induced by pheromones” would be what an ancient scientist might have referred to as “witchcraft” and what a FUTURE scientist would refer to as “perfume.” So maybe instead of taking the stories at face value, one could ask oneself: what modern concepts would those stories be analogous to? In other words – if somebody were describing More Realistic Concept X to an ancient person, and they interpreted it through their superstitiously ignorant eyes as being “Atlantis,” what might that More Realistic Concept have been, and how would modern people interpret that concept?

      I guess I’m a bit of a conspiracy theorist at heart also. 😉

      • Protagoras says:

        Atlantis isn’t an ordinary myth; people who take Atlantis seriously are much sillier than people who take mythology too literally. Atlantis was first mentioned by Plato. If you read the original text in which Atlantis is mentioned, and you have any familiarity with Plato at all, I can’t see how you could possibly come to any conclusion other than that he made it up. Or, I suppose, instead of making up a BS story himself, he may have taken a BS story he heard and adapted it, but it’s definitely presented as a BS story (Critias, who tells the story, is not especially reliable, his explanation of where he got the story is very much the kind of explanation you’d expect from a teller of tall tales, and the story is carefully crafted to fit with Critias’ political views; it really isn’t subtle at all). I really don’t know how the tradition ever developed of people thinking Plato was describing history (a subject in which he never displays the slightest bit of interest in any of his writings).

  46. gwern says:

    Since hedgehogs are not able to predict the observed data, they are, by definition of Occam, not holding the simplest theory which predicts the data. They are merely holding a simplistic theory.

  47. LesHapablap says:

    All of this is covered in David Deutsch’s Beginning of Infinity. I’d like to request a book review please!

    I believe he would respond to the Satan vs. Paleontology question that the Satan theory can be used to explain anything. No matter what archeologists find – if they found 50,000-year-old nuclear submarines buried under Stonehenge, “Satan did it” would still work as an explanation. David is also a multi-verse proponent and discusses it in the book, so you might find that interesting.

    • ChrisA says:

      Great suggestion on David Deutsch’s Beginning of Infinity, really good book that changed my views on many things.

  48. rahien.din says:

    Consider Solomonoff induction, as laid out in the Sequences’ chapter on Occam’s Razor: the complexity of a description is measured by the size of the smallest Turing machine necessary to simulate it. Yudkowsky describes it this way:

    To a human, Maxwell’s equations take much longer to explain than Thor. Humans don’t have a built-in vocabulary for calculus the way we have a built-in vocabulary for anger. … And yet it seems that there should be some sense in which Maxwell’s equations are simpler than a human brain, or Thor the thunder-agent. There is. It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s equations, compared to a computer program that simulates an intelligent emotional mind like Thor.

    In order for Thor to be the cause of lightning, we still need Maxwell’s equations and climatology. But we also need descriptions of wrath itself, of personality in general, of Thor’s personality in specific, of the events adjacent to Thor that could have provoked his wrath, and of the methods he uses to make lightning – methods that must extend to the creation of entire storm systems. There is a tremendous amount of information compressed into “Thor’s” and “wrath.” The scientific description is far less compressed, and thus, [Maxwell’s equations] is a better password for lightning than [the wrath of a Norse deity].

    But consider the following series of true descriptions:
    1 : [audible grunt]
    2 : The object is heavy.
    3 : It takes a lot of effort to lift the piano.
    4 : One’s capacity for work is significantly taxed in attempting to overcome the gravity attracting the Steinway grand piano to earth.
    5 : A human’s biochemical-mechanical ability to produce a reference frame shift is nearly counterbalanced by the piano-earth acceleration resulting from the curvature in space-time induced by the local mass system comprised of the earth and the 1995 Model L Steinway grand piano.

    Galaxy brain : [???]

    #5 has the greatest potential fidelity with a particular situation, and is probably the least compressed in the sense described above. It’s also pedantic to the point of uselessness. Occam’s Razor utterly rejects it in the context of moving an actual piano. In fact, the Turing machine required to simulate #4 (Newtonian gravity, the body as an actuator, specific piano type) is much smaller than the one required to simulate #5 (detailed depiction of human biology, relativistic description of gravity, extremely specific piano type).

    Certainly such descriptions as #5 are sometimes useful – there are many situations that call for high precision and complexity. However, the necessary degree of complexity is determined by context.

    Why? The real trouble with trying to guess the teacher’s password is that there isn’t a password to guess. Descriptions are useful abstractions/axiomatic systems/maps, but not more than that, because the universe doesn’t have a password. There is no “Galaxy Brain” description – no description can ever be complete, and no description can ever be free of compression. No system of axioms can ever fully encapsulate reality.

    But “not fully encapsulate reality” is exactly the point. Key onto the word “useful.” It takes a finite and measurable amount of time to walk the coastline of an island, and this is what we care about. A map is only navigable because the cartographer left information out – a map is a particular kind of fiction. Or, “The role of a [map] was, in the broadest terms, to transpose a problem into another form. Depending on the nature and the direction of the problem, a solution might be suggested in the narrative.” A narrative that suggests a solution is also called “a game.” Maps are games.

    So instead of Turing machines simulating processes, we should think of Turing machines that encode games. The shortest Turing machine required to simulate the “can you move the piano” game is one that stops short of relativistic physics. There is an Occam Zone with razors positioned on each side, according to the game we want to play. The bottom razor slices off any description too compressed to direct action. The top razor slices away any descriptions too complex to permit easy action.

    Now we have a better form of Solomonoff induction : assign the highest probability to the smallest Turing machine that encodes the desired game.
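
    As a toy rendering of that rule (the machine sizes and the “encodes the game” judgments below are invented placeholders, not measurements):

```python
# Contextualized Occam: drop descriptions that cannot encode the game at all
# (the bottom razor), then prefer the smallest machine among the rest (the top
# razor). Scores stay in log2 space so nothing underflows.
descriptions = {
    # name: (machine_size_in_bits, can it encode the "move the piano" game?)
    "[audible grunt]":             (10,     False),
    "the object is heavy":         (60,     False),
    "Newtonian 'lift the piano'":  (400,    True),
    "full relativistic account":   (50_000, True),
}

def contextual_score(size_bits, encodes_game):
    """Minus infinity if the description can't encode the game; otherwise smaller is better."""
    return -size_bits if encodes_game else float("-inf")

best = max(descriptions, key=lambda name: contextual_score(*descriptions[name]))
print(best)   # the Newtonian description: the smallest machine that still encodes the game
```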

    We can still reject [the wrath of a Norse deity] via this contextualized form, because it doesn’t help us play any viable games, such as “protect ourselves from lightning” or “enjoy a good thunderstorm” or “make weather projections.” But we might have a harder time rejecting [the wrath of a tall-tree-hating Norse deity]. In other words, what if we really try to get to know Thor, so as to avoid his wrath?

    We might find that Thor despises all things tall and lonely, for this is where he flings lightning bolts. He hates the man standing on top of his house. He hates the lone tree on top of a hill. We might find that Thor seems less angry at places that are sheltered. We might find that Mjolnir is about a steinkast in diameter, and that rather than flinging it, Thor seems to roll it over the earth. We might find that Thor’s wrath seems to pass over the countryside in waves, and must always correspond to storm clouds. We might find that Thor’s anger is somewhat predictable, occurring more commonly on summer afternoons, and can be forecast at least several days ahead if we watch the weather carefully. We might wonder why Thor is so angry at Uganda, yet so pleased with Algeria. Ultimately we would find that Thor’s wrath precisely corresponds to the mechanisms of climate.

    We may conclude that Thor is seemingly as much a slave to the weather as everyone else – he plays the same game as us mortals. Thus, we do not reject [the wrath of a Norse deity] because it adds unnecessary information to the description of lightning. We reject [the wrath of a Norse deity] because it doesn’t add any new rules to lightning-associated games. (And that is so, even if Thor exists.)

    It might still be useful to describe a bad lightning storm as “angry” because that is an easy way to remember the rules of the “protect ourselves from lightning” game. It is a mnemonic device – a post-hoc encapsulation of the rules of a particular game. This is absolutely fine. We learn information in associative chunks. We need those mnemonic devices (or, a certain compression ratio) in order to act quickly. As long as you do not believe that the storm itself is animated by actual emotion, or you believe that the storm’s emotion would still be slave to physical mechanisms, you are fine. Similarly, “Just say the word “sphere” and the rest falls out naturally!” would be a good mnemonic device, but an actual Extradimensional Sphere would not be a good belief. The Extradimensional Sphere’s only access to the world would be through the particles’ sphere-properties, and so its existence would not add any rules to any game. The sphere would play the particles’ game, not the other way around. Or, going back to the sequences, the proper response to “A witch did it!” is “A witch did what, exactly?” That question drives ever toward the answer “A witch did exactly what physics told her and enabled her to do.”

    It is a dangerous mistake to endow mnemonic devices with mystical (or, unverifiable) significance.

    What we want to do is – science’s goal is – to play better and better games. “A witch did it!” and [the wrath of a Norse deity] and the Extradimensional Sphere are all mnemonic devices. They are aimed not at building a newer, better, more context-responsive Turing machine, but rather at minimizing the effort cost of an already-built Turing machine. That’s why another description of a mnemonic device is “plausible retrodiction.”

    But all that does is keep us playing the same dumb game, only with less effort. Moreover, a mnemonic device cannot withstand a new rule or a new game, and so it will resist new rules and new games. It becomes harder to think through a problem without the mnemonic device, and the whole reason it exists is that we want to expend less effort, so it seems best to just not change the pattern of thinking. Thus is the unwary thinker trapped. So at worst, mnemonic devices are cognitive parasites.

    This is why plausible retrodiction is science’s adversary – why we have science in the first place. Not to create unassailable descriptions of reality, not to allow us to look backward at what we already know, but to keep us looking forward via the question “What else can we do?”

    …so, multiple universes do what, exactly? They do what the universe tells them to do. They play the same physical game by the same physical rules.

  49. For example, in the so-called Many-Worlds interpretation of quantum mechanics, there are universes containing our parallel selves, identical to us but for their different experiences of quantum physics. These theories are attractive to some few theoretical physicists and philosophers, but there is absolutely no empirical evidence for them. And, as it seems we can’t ever experience these other universes, there will never be any evidence for them.

    We’ll also never “experience” ancient Greece, but we can still conclude that it existed. The problem with this is that experiments to confirm MWI have been proposed (https://en.wikipedia.org/wiki/Many-worlds_interpretation#Weak_coupling), and even if we don’t have any way to confirm it now, we might in the future. In 1800 you could claim that we could never know the chemical composition of distant stars, that we could only “model” them as distant sources of light. Then we figured out spectroscopy.

  50. jvimal says:

    My test for evaluating hypotheses/models is borrowed from machine learning: you infer a model based on observations (training data), and it’s good/better if the once-trained model has good out-of-sample performance (i.e., on data unseen during training). The trade-off between model complexity and out-of-sample performance is well understood. At that point, choosing a simpler or more complex model really depends on the application and the accuracy that you need. For instance, Newton’s laws are “good enough” for many cases.
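
    A minimal sketch of that framing in code (my own toy construction, not anything from the comment): fit polynomials of increasing degree to noisy data, then compare the error on the training sample with the error on a held-out sample.

    ```python
    # Toy illustration of the complexity vs. out-of-sample-performance trade-off.
    # Everything here (the sine "truth", the noise level, the degrees) is made up
    # purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        x = rng.uniform(-1, 1, n)
        y = np.sin(3 * x) + rng.normal(0, 0.2, n)  # hidden signal + noise
        return x, y

    x_train, y_train = make_data(30)    # observations the model is trained on
    x_test, y_test = make_data(200)     # data unseen during training

    for degree in (1, 3, 15):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

    # Typically the degree-15 fit has the lowest training error but the worst
    # out-of-sample error: more complexity is not automatically "better".
    ```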

  51. cyanochlorous says:

    Complex numbers seem kabbalistically similar, somehow. I think most people, when they first learn about complex numbers, react with incredulity: “come on, you’re just going to invent a weird new magic type of number, with no obvious physical analogue, that squares to negative? and on top of that, you’re going to call them *imaginary* numbers? you’re not even *trying* to make this plausible”. But they turn out to have lots of convenient properties, so we keep them around. So it is with looking for other elegant theoretical contrivances.
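
    One concrete instance of those “convenient properties” (a standard identity, nothing original): once you accept that i^2 = -1, Euler’s formula links exponentials to trigonometry, and things like the angle-addition formulas fall out of ordinary algebra:

    $$e^{i\theta} = \cos\theta + i\sin\theta, \qquad e^{ia}e^{ib} = e^{i(a+b)} \;\Rightarrow\; \cos(a+b) = \cos a\cos b - \sin a\sin b.$$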

  52. sclmlw says:

    When is it okay to say we don’t understand a thing? Do we have to stop saying we don’t understand something because a clever person invented a story that explains it really well? I thought the whole point of the scientific method was to help us determine – one-by-one – whether stories that explain observations have predictive merit.

    I think it’s fine to label something that merely describes previously-observed phenomena with the term philosophy, without having to call it science. For example, when Einstein first proposed special relativity there were no direct observations of the claims he made, and there was no obvious way to test those claims. It remained in a state of being great math/philosophy, until it was actually tested.

    As a counter-example to Einstein, the prevailing model of genetic transmission in the 1940s proposed protein as the vector for the genetic material. This made lots of sense, because it was more complex than the other candidates and therefore more likely to be able to fulfill the information requirements. DNA was ignored by all but a few random DNA fans, because it was too simple. It seemed more like a structural molecule than an information-carrying one. The genetic-protein model was good philosophy, but it failed real-world experimental confirmation. Thus, we can work with philosophically-derived hypotheses to whittle away those that are not internally logically consistent, but that doesn’t replace hypothesis testing.

    I would certainly reject the notion that the lack of a better explanation should be considered when we determine whether the current explanation is viable. Few people thought DNA was a good candidate for the genetic material prior to the discovery of the structure of DNA. The best description of the snapshot state of knowledge at that time was simply, “we have philosophically-consistent models, but we don’t have experimental confidence in any available explanations.”

    In other words, no matter how good the model is, it’s okay to say “I don’t know yet”.

  53. AlexOfUrals says:

    As some noted above, I think the examples with the dinosaurs and the Sphinx are from a different category than the multiverse.

    since Satan is trying to fool us into believing the modern paleontology paradigm, he’ll hide the fossils in ways that conform to its predictions

    Yes, but it doesn’t make any predictions of itself. If Alice was given all the correct answers on a test, and Bob thought “Alice’s answers are correct” and copied them, both of them have successfully predicted what the correct answers are, but only one has the power to do so independently, while the other just piggybacked on that power. If paleontology changes some of its predictions about fossils, the Old Deluder Satan theory will change “its” predictions to match, and if for some reason the Satan theory decided to diverge, paleontology wouldn’t care. There’s a clear and experimentally testable causal link “paleontology predictions” => “creationism predictions”, and under further examination it’s clear the second theory adds no predictions of its own and so has zero predictive power. The same applies to either of the pharaohs vs Atlantis, but not to distinguishing between the pharaohs. The latter, I think, can be adequately described in terms of arguing about the correct technical way to do inference from empirical observations.

    I still agree that the arguments about Occam’s Razor and elegance apply to the multiverse hypothesis, but I’m not sure it’s valid to draw from the persuasive power of the former 2 examples to support this case.

    Or even from the example with the sphere, probably, if you think about it. (As stated, the Extradimensional Sphere is quite testable – there’s no physical law prohibiting particle accelerators the size of a moon. But let’s say we can’t test the sphere directly, period; it’s postulated in such a way as to be completely beyond our realm of existence.) The sphere postulates a direct and clear causal chain between the observed masses of the particles, even if one of the links in this chain is entirely beyond our reach. What if, by better experiment design, we improved our estimate of the mass of one of the particles? The sphere theory predicts exactly how we should expect the other particle masses to change, when we can make similar improvements to their estimates. OK, let’s say that we already know all the masses down to some indivisible “quantum of energy” (no idea if such a thing really exists), and it all lines up precisely with the sphere theory, and we knew it before we formulated the theory. Even then, the sphere theory predicts a very low probability that we’ll ever find an error in any of the previous measurements, and if we do, we should expect very specific errors in the other 99 measurements. The default theory gives both a much higher chance of finding an error (it needs to be just 1 error instead of 100 coordinated ones), and finding one error doesn’t affect our estimates for the other errors (except trivially: “oops, turns out we’re not as good at experiments as we thought”).
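
    To put rough numbers on that last contrast (a back-of-the-envelope sketch of my own, with a made-up per-measurement error probability p): under the default theory the 100 measurement errors are independent, so

    $$P(\text{at least one error}) = 1 - (1 - p)^{100}, \qquad P(\text{error in measurement } j \mid \text{error in measurement } i) = p,$$

    whereas under the sphere theory all 100 masses are functions of the one underlying sphere, so discovering an error in one measurement forces specific, coordinated revisions of the other 99 rather than leaving their estimates untouched.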

    I don’t know whether the similar logic can be applied to the many worlds hypothesis. My obvious guess is that it can’t because otherwise of course somebody would’ve thought of it already. I suppose that it’s more like “the sphere has 10^100 properties and particles’ masses correspond to some of them but they can be anything and still correspond to some of them so whatever”. But if someone more knowledgeable in physics can provide more details on that I’d be extremely grateful.

    • theodidactus says:

      I know this is implicit in what a lot of people are saying but I think it’s not made explicit enough. Modern Paleontology differs from Ole’deluder satan theory because Modern Paleontology can confidently tell you what *can’t* happen. A lot of the conversation above regarding falsifiability and “useful” predictions is predicated on this. On an expedition to mars, modern paleontology can tell us that we won’t find fossilized ammonites up there, if we do, modern paleontology gets axed. Ole’deluder satan theory, as with alice and bob above, has to wait and “check” with modern paleontology first…and if we find fossilized ammonites on mars, ole’deluder satan theory can confidently assert that whaddya know, modern paleontology got this one wrong.

      I think a lot of people overlook how much modern technology relies on a simple confident assertion that stuff won’t or can’t happen.

      • AlexOfUrals says:

        Agreed, that too. And that applies to the sphere theory too – it says we can’t change the mass of one particle without changing all the others. And I feel like the sphere theory feels compelling mostly because of that property, even if one doesn’t spell it out explicitly. And so saying “…and therefore the multiverse should be compelling too” isn’t quite justified. Not to say I don’t think the multiverse is more likely to be true – it’s just not that much more likely.

  54. blacktrance says:

    A fourth bad response: “There is no empirical test that distinguishes the Satan hypothesis from the paleontology hypothesis, therefore the Satan hypothesis is inherently unfalsifiable and therefore pseudoscientific.” But this can’t be right. After all, there’s no empirical test that distinguishes the paleontology hypothesis from the Satan hypothesis! If we call one of them pseudoscience based on their inseparability, we have to call the other one pseudoscience too!

    Depends on what counts as an empirical test.

    First, if you could time-travel, you could examine the location of the fossil until you observe its formation, which doesn’t involve Satan. Though impossible, this would still be an empirical test that distinguishes between the two hypotheses, so they’re ideally falsifiable. You might not want to restrict falsification to actually-practical-falsification, because then unfalsifiable theories could become falsifiable through the invention of new scientific instruments, which seems wrong.

    Second, more practically, negative evidence is still evidence. Paleontology coheres with the rest of our knowledge about the world (which is supported by empirical evidence) in a way that the Satan hypothesis doesn’t. For example, if we saw Satan hiding stuff now, that’d be evidence in favor of the Satan hypothesis, so the absence of such observations is evidence against it. That’s why Bayesianism is a refinement of empiricism rather than an alternative.
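
    In symbols (a standard Bayesian identity, not anything new): let H be the Satan hypothesis and E the observation “we catch Satan hiding stuff”. If P(E | H) > P(E | ¬H), then P(¬E | H) < P(¬E), so failing to observe E lowers the probability of H:

    $$P(H \mid \neg E) = \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} < P(H).$$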

  55. harshcurious says:

    Being more of a mathematician than a physicist (not a physicist at all), I have to say the many-worlds theory smacks of a new math proof which is really just a new and maybe better way of writing down an existing proof. There is nothing wrong with it. It’s just not as interesting as a new proof.

  56. morris39 says:

    Please consider the simplest criterion for a theory or hypothesis, i.e. what use is it? What am I (or society, using informed self-interest) willing to pay for this? Is there an opportunity cost, or do we have unlimited resources? Lastly, cui bono from the theory?

  57. Tadas says:

    What is missing is how the bit under discussion fits in with the other bits. Paleontology, taking the first example, fits well with the theory of evolution, the known age of the Earth, evidence of mass extinctions, and many, many other observations and theories. The devil fits with nothing but religion. If you follow that path, you end up with the Bible and no predictive power. It is forgotten both in this blog and in the quoted article that physics has had many triumphs where a theory based on first principles turned out to be extremely predictive. The Standard Model is one such example, by the way. I have a Ph.D. in high energy physics, so I know it better than many.
    The mirror/parallel universe idea predicts nothing new and doesn’t fit with anything else we know. So where is the value in it? What rankles most scientists about such approaches is that they are akin to saying “because it is so”. Physics was built on the questions “how” and “why”. When a theory says “this is the final answer, which we can’t ever test, it makes no predictions, and it just is so”, then history is on your side to say “no, we have seen such dead ends before, and we will emerge out of it with yet another predictive theory, which will fit in with what we had before, yet be even better when the limits of the current theories are exceeded”. If you are willing to bet on the end of history, that’s fine; then go with parallel universes and the anthropic principle (for those who don’t know, it means that the masses of the particles and fundamental constants are what they are only because if they weren’t, we wouldn’t exist in such a universe; predictive power = 0).

  58. theodidactus says:

    I mean I don’t know much about physics but isn’t it more parsimonious to assume that Satan causes the waveform collapse?

  59. paragonal_ says:

    (Crossposted from LW)

    I don’t think that the QM example is like the others. Explaining this requires a bit of detail.

    From section V.:

    My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.

    That’s not an accurate description of the state of affairs.

    In order to calculate correct predictions for experiments, you have to use the probabilistic Born rule (and the collapse postulate for sequential measurements). That these can be derived from the Many Worlds interpretation (MWI) is a conjecture which hasn’t been proved in a universally accepted way.
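
    For reference, the rules mentioned here, as stated in standard textbooks (written from memory, so treat the notation as a sketch): if the system’s state is expanded in the measurement basis with complex coefficients c_i, the Born rule gives the probability of each outcome, and the collapse postulate says that after outcome i is observed you use the corresponding basis state for subsequent predictions:

    $$|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad P(\text{outcome } i) = |c_i|^2 = |\langle i \mid \psi \rangle|^2.$$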

    So we have an interpretation which works but is considered inelegant by many, and we have an interpretation which is simple and elegant but is only conjectured to work. Considering the nature of the problems with the proofs, it is questionable whether the MWI can retain its elegant simplicity if it is made to work (see below).

    One (doubtless exaggerated) way I’ve heard multiverse proponents explain their position is like this: in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”

    What I find interesting is that Copenhagen-style interpretations looked ugly to me at first but got more sensible the more I learned about them. With most other interpretations it is the reverse: initially, they looked very compelling, but the intuitive pictures are often hard to make rigorous. For example, if you try to describe the branching process mathematically, it isn’t possible to say when exactly the branches are splitting, or even that they are splitting in an unambiguous way at all. Without introducing something like the observer, who sets a natural scale for when it is okay to approximate certain values by zero, it is very difficult to speak of different worlds consistently. But then the simplicity of the MWI is greatly reduced and the difference to a Copenhagenish point of view is much more subtle.

    Generally, regarding the interpretation of QM, there are two camps: realists who take the wave function as a real physical object (Schrödinger, Bohm, Everett) and people who take the wavefunction as an object of knowledge (Bohr, Einstein, Heisenberg, Fuchs).

    If the multiverse opponent describes the situation involving “some unknown force” he is also in the realist camp and not a proponent of a Copenhagenish position. The most modern Copenhagenish position would be QBism which asserts “whenever I learn something new by means of a measurement, I update”. From this point of view, QM is a generalization of probability theory, the wavefunction (or probability amplitude) is the object of knowledge which replaces ordinary probabilities, and the collapse rule is a generalized form of Bayesian updating. That doesn’t seem less sensible to me than your description of the multiverse proponent. Of course, there’s also a bullet to bite here: the abandonment of a mathematical layer below the level of (generalized) probabilities.

    The important point is that this is not about which position is simpler than the other but about a deep divide in the philosophical underpinnings of science.

    Taking this exaggerated dumbed-down account as exactly right, this sounds about as hard as the dinosaurs-vs-Satan example, in terms of figuring out which is more Occam’s Razor compliant. I’m sure the reality is more nuanced, but I think it can be judged by the same process. Perhaps this is the kind of reasoning that only gets us to a 90% probability there is a multiverse, rather than a 99.999999% one. But I think determining that theories have 90% probability is a reasonable scientific thing to do.

    As per what I have written above, I think that there’s a crucial difference between the examples of the fossils and the sphinx on the one hand and the interpretation of QM on the other hand. Which interpretation of QM one prefers is connected to one’s position on deep philosophical questions like “Is reductionism true?”, “Is Nature fundamentally mathematical?”, “What’s consciousness?”, etc. So the statement “[there’s a] 90% probability there is a multiverse” is connected to statements of the form “there’s a 90% probability that reductionism is true”. Whether such statements are meaningful seems much more questionable to me than in the case of your other examples.

  60. Imagine that tonight you go to sleep and have a dream about theoretical physics. You could either dream up a better experimental method to deal with some of these debates, or you could be given a better theoretical framework. You will not remember having made this choice. You’ll just wake up with the new idea in your head. So which option do you choose?

  61. Viliam says:

    The problem with Satan planting ancient fossils in the ground is that it assumes two mechanisms by which fossils can appear; I assume the creationists would be okay with the assumption that a hundred-year-old skeleton of a cow is actually the skeleton of a cow that died a hundred years ago. So the paleontologists assume one natural mechanism by which skeletons get into the ground (animals died), while creationists assume two (animals died, Satan faked it). They assume one mechanism for relatively young skeletons, and another mechanism for relatively old ones.

    As another example, imagine someone who insists that things follow Newtonian physics. Now you can tell them about GPS and Mercury and whatever… but that will only make them admit that fast things move according to the theory of relativity; they will still insist that slow things move according to Newtonian physics, not just approximately but literally. What now?

    From your perspective, believing that everything moves according to relativity is the simpler explanation. From their perspective, using Newtonian physics for low speeds is simpler than using relativistic equations (even if they admit the necessity of using relativistic equations for high speeds). Especially if the relativistic equations give experimentally indistinguishable answers for the small speeds — from your perspective it is an argument for “relativity everywhere”; from their perspective it is “relativity for slow objects is just a metaphysical hypothesis that cannot be experimentally proved; which is exactly why I reject it”.

    So your last desperate argument is: “Imagine an object going fast, that gradually slows down, until it stops. Does it mean that at certain moment it will switch from following relativity to following Newtonian mechanics? When exactly that happens, and why?” And you believe this is a devastating argument. But the other guy simply says: “Yes, that’s exactly what happens; I call it ‘collapse of relativity’. I can’t tell you when exactly that happens, but I am absolutely sure that when I am walking on the street, relativity does not apply to me.” You: “And how exactly would you have noticed if relativity applied to you walking on the street? The equations predict the same observations?” They miss the point and repeat: “Exactly, the same observations, which is why ‘relativity at slow speeds’ is merely a metaphysical concept.”

    (So, there are actually two sins. One is preferring two theories as simpler than one theory. The other is saying “X1 implies Y, which is why seeing Y is strong evidence for X1”, when in fact X2 implies exactly the same Y. Both Newton and Einstein predict the same experience when you walk on the street, therefore… somehow… having that experience is an argument for Newton and against Einstein.)

    Back to the original topic… many-worlds hypotheses assume that the entire universe runs on quantum physics. Collapse hypotheses assume that tiny things run on quantum physics, but big things run on classical physics. No one knows where the line between the “tiny” and the “big” things is, other than that anything that was ever proved to be in a superposition obviously must be “tiny”, and humans must be “big”. Two sets of laws of physics, instead of one.

    Why? Is it because believing in quantum physics for big things would predict different experience than what we have? No, it would actually predict exactly the same experience. But somehow this becomes an argument against ‘quantum physics for big things’.

    We already know that the tiny things follow quantum physics. The only question is whether we need to keep a separate set of laws of physics for the big things. Well, the quantum physics and the classical physics predict the same outcomes for the big things, therefore…

    …and at this moment some people conclude “therefore it makes sense to only have one set of laws of physics”, while others conclude “therefore it makes sense to believe there are two sets of laws of physics, and things switch between them, and anyone who says there is only one set of laws is talking metaphysical nonsense.”

  62. mwacksen says:

    Your blog post is (as usual) well written but I’m not convinced by the arguments.

    Firstly – the predictions of the “Satan” theory for fossils match the predictions of the standard theory. I’m happy to be a naive Popperian and say that “science” cannot distinguish between the two theories. Science is a method of trying to figure out how to systematically predict things; I don’t see why science should (or can) tell us about the “truth” of certain statements. Occam’s razor may be a useful heuristic sometimes, but it’s dangerous to rely on it to establish “truth”.

    I have some examples that I hope act as a kind of counterpoint:

    a) The Rutherford model of the atom treats the nucleus as a central charge around which electrons orbit. It makes some very nice predictions; let us assume (I do not know if this is the case) that at some point it seemed to accurately predict all known properties of the atom. Someone living at the time would think the Rutherford model is “true” and describes how atoms work. If we believe quantum mechanics (henceforth QM), this model is completely conceptually wrong: what is actually going on is that there is a wave function that evolves via the Schrödinger equation and may or may not collapse, giving something like the “particles” described by Rutherford. A pre-QM scientist trying to give a probability of “truth” to the Rutherford model would have given it a high probability; a post-QM scientist would give it a low probability. However, both pre- and post-QM observers reasoning about the predictive power of the Rutherford model would agree that the model has high predictive power in many situations even if it is not “true”. Both may still use the model to do science.

    b) A similar situation arises in Newtonian vs Relativistic mechanics.

    c) The probability density function of a particle undergoing Brownian motion (a kind of random walk) evolves via the “heat equation”, which has this name because it also describes the evolution of temperature with time. This relationship is mathematically very fruitful, as mathematicians switch between the two points of view to reason about one object using methods applicable to the other. A smart physicist, realizing this, may postulate that temperature is actually just a virtual particle, randomly evolving by Brownian motion, so that each path of Brownian motion is its own multiverse. The concept of “heat” could be these virtual particles from different multiverses all interacting with real particles. This explanation may be elegant and have predictive power, but this doesn’t mean we should start reasoning about multiverses. The reason this example is relevant is that the Schrödinger equation which governs QM is just the heat equation with imaginary (in the mathematical sense of complex numbers) dissipativity (the two equations are written out just after this list).

    d) As hinted in the previous post, mathematics is full of deep connections. For example, if you sum all reciprocals of squares, multiply by 6 and take the square root, you get pi (see “Basel problem”, also written out after this list) — and this is so even though pi is first defined by quantities related to circles. Quantities like pi pop up in all kinds of places. In the same way, if the particles from your examples are all connected by numbers related to a sphere, this could just be the fact that spheres are mathematically elegant and therefore numbers related to them pop up whenever nice mathematics is being done. The real question, IMO, is why the laws of physics are sometimes so mathematically nice.
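
    For reference, the two standard mathematical facts leaned on in (c) and (d) above, written out: the one-dimensional heat equation and the free-particle Schrödinger equation differ only in that the diffusion coefficient becomes imaginary, and the Basel identity produces pi from a sum over the integers:

    $$\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2}, \qquad i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} \;\Longleftrightarrow\; \frac{\partial \psi}{\partial t} = \frac{i\hbar}{2m}\,\frac{\partial^2 \psi}{\partial x^2},$$

    $$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}, \qquad \text{so} \qquad \pi = \sqrt{6 \sum_{n=1}^{\infty} \frac{1}{n^2}}.$$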

  63. Paul Brinkley says:

    My favored explanation for why the Satan theory is inferior to the paleontology theory is somewhat related to elegance and falsifiability.

    Both theories explain what you see, but the latter rules out more things. Paleontology says you won’t see an allosaur fossil in the middle of an ancient ocean. It says you won’t see a tyrannosaur in rock older than rock where you find a dimetrodon. It also says you won’t see rock formations that spell out the words “THERE IS NO GOD” in Old Enochian. It even says you won’t see rock formations that denote a myriad other ways they could’ve been arranged that would lead you to believe they arose through natural processes, that just happened to not include dinosaurs.

    This is equivalent to saying the Satan theory is worse because it proves too much. All sorts of things become equally plausible if we assume the existence of a being capable of orchestrating fossils.

    Or, maybe God created fossils in order to test our faith. Or, God created a universe in which fossils could arise after He left it alone, that we might spend time studying them, and thus appreciate the grandness of His creation.

  64. APXHARD.com (Mark Neyer) says:

    I took naive popperism to its logical conclusion and crashed into a brick wall, as detailed here.

    If you really think things need to be falsifiable, the idea of reality itself becomes suspect. After all, what evidence could you possibly give that there are fixed, unchanging rules governing how the world works, and we can discover them through logic? Why couldn’t the rules change tomorrow? There’s no empirical way to prove reality exists.

    • Faza (TCM) says:

      After all, what evidence could you possibly give that there are fixed, unchanging rules governing how the world works, and we can discover them through logic? Why couldn’t the rules change tomorrow? There’s no empirical way to prove reality exists.

      That sounds to me very much like exactly the problem Popperian falsificationism was meant to address: in a nutshell, you can’t.

      Popper tells us we don’t have to. It’s okay to be wrong. What’s not okay is having no way to find out that we are wrong.

      When you postulate that the rules won’t change tomorrow, you are putting forth a falsifiable hypothesis. If they do change tomorrow, it will have been falsified. If they don’t change tomorrow, it won’t mean your hypothesis of immutable rules (yes, I am extending it here to the usual meaning) is true – it simply means that it hasn’t been shown to be false, yet.

      If your assumption of immutable rules has, so far, allowed you to make useful predictions (because it hasn’t been proven false), it makes sense to keep using it – right up until it becomes false and is no longer useful.

      The fact we can’t prove the existence of reality is a red herring. As long as our predictions are obviously better when we assume reality is… well… real, it doesn’t matter if it is really real, much less whether we can provide a rigorous proof of its reality.

      However, if you come visit me and choose to leave by the door, rather than the window (I live on the ninth floor), forgive me if I choose to treat your epistemological skepticism with less respect than it perhaps deserves.

  65. michelemottini says:

    `Some unknown force steps in at the last second and destroys one of the possibility branches` is the rule you have to use to make computations that match the experiments – it is not an idea of ‘anti-multiverse’ people. Then you can interpret that fact in different ways (hence ‘Copenhagen interpretation’ etc) – but it is a fact.

  66. ahartntkn says:

    I don’t think the Everett interpretation is worth defending in this way. The default, Bayesian assumption is that superpositions are intuitive phenomena like any other linear combination of things in any other field, and the probabilities of quantum mechanics are also intuitive phenomena, like probabilities in any other field. The Everett interpretation goes out of its way to assert that, unlike linear combinations in other fields, superpositions have an ontological nature independent of thought which, when applied to the whole universe, becomes a multiverse.

    There’s also the extra issue that “superposition of universe states = multiverse” is an ill-specified assertion. What, exactly, is that right side? I’ve never seen a clear explanation for what a multiverse is supposed to be, and it seems obvious that a universe with splitting timelines, however you might try to define that, is not equivalent to a linear combination of universe states.

    In fact, Sean Carroll just strawmans the problem. From here

    This is the quantum Bayesianism approach, or more generally “psi-epistemic” approaches. The idea is to simply deny that the quantum state represents anything about reality; it is merely a way of keeping track of the probability of future measurement outcomes. Is the particle spin-up, or spin-down, or both? Neither! There is no particle, there is no spoon, nor is there the state of the particle’s spin; there is only the probability of seeing the spin in different conditions once one performs a measurement.

    This quote illustrates that Carroll does not understand QBism (or, more likely, he’s stubbornly refusing to try understanding it). QBism, which doesn’t say anything essentially different from ordinary Bayesianism + naive mathematical intuitionism, does not deny that external reality exists, or that intuitions can represent things in reality. What it denies (and Carroll implicitly assumes this when he uses the word “represents”) is that intuitions are external to thought. When you count 2 sheep, the 2 is in your head, not in the sheep. This doesn’t mean the 2 doesn’t represent anything in external reality; it just does not possess an existence independent of the thoughts modeling said external reality.

    Among the mathematical physicists I follow (Urs Schreiber, for example), there’s a lot of effort put into making conceptually elegant, mathematically well-justified arguments for various aspects of physics. This article, for example, takes an approach along these lines to argue that (high-energy) supersymmetry is probably true by pointing out that particle physics without super-partners is mathematically weird. This is a much better example of what you’re talking about, and it does have real-world, if rather far-off, consequences on top of that.

    I’ve rarely seen interpretations of quantum mechanics discussed in those circles, but when I do, more often than not, it’s QBism that gets promoted rather than anything that makes extraneous ontological assumptions. Incidentally, it’s the only interpretation that even has a page on the nLab.

    • dionisos says:

      unlike linear combinations in other fields, superpositions have an ontological nature independent of thought which

      The linear combinations are in the mind, but not the things they represent/map to, in a lot of cases.

      There’s also the extra issue that “superposition of universe states = multiverse” is an ill-specified assertion. What, exactly, is that right side? I’ve never seen a clear explanation for what a multiverse is supposed to be

      I think a multiverse is just a universe, but with many times what we intuitively conceptualize as our material world (in the naive, everyday way).
      But I don’t think this word is really important here.

      When you count 2 sheep, the 2 is in your head, not in the sheep. This doesn’t mean the 2 doesn’t represent anything in external reality, it just does not possess an existence independent of the thoughts modeling said external reality.

      I think “a superposition of wave functions” is like “2 sheep” here, not like “2”.

      • ahartntkn says:

        I think “a superposition of wave functions” is like “2 sheep” here, not like “2”.

        I did not intend to compare the two phrases. My point was that mathematical concepts are, first and foremost, intuitive ones, without existence outside the mind. The concept of “sheep” is also in the mind; that it intends to model something, first and foremost, outside the mind doesn’t change this. What this does give us is an expectation that the concept of sheep doesn’t have any extraneous details which don’t refer to particular parts of the IRL sheep. Quantum states, Hilbert spaces, etc., being intuitive, purely mathematical structures first and applied to the world after, shouldn’t give us such an expectation. Quantum states are made of matrices of complex numbers, for instance, though one would not assert separate existence to those numbers just because they are being used as part of something trying to refer to the real world. The Everett interpretation does assert separate existence for linear combinations of states in particular, however.

  67. dionisos says:

    It is a little off topic, but there is something which really bothers me about the MWI.
    If it is correct and I understand it correctly, it means all the worst things are happening all the time in crazy quantities. It seems like the worst news by far in all of human history.

    Am I missing something?

    • sharper13 says:

      Perhaps that many more good things are also happening all the time in crazy quantities?

      • dionisos says:

        Yes, basically like Unsong but without the garden keeper.

        But this is hardly reassuring; the Unsong stuff already seemed grim.

  68. argentus says:

    I think there’s a healthy bit of space between “Hey, we are pursuing this multiverse thing because all the other ideas are much worse” and “this idea must be true because it’s mathematically pretty and I find the idea of the real world being mathematically ugly disappointing.”

    And some of the multi-verse people really do appear to have attitudes approaching that last statement. Lost in Math by Sabine Hossenfelder was a good read on the topic.

  69. Joe says:

    “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”

    Does this mean math is the “Unknown Force” that forces us into one particular multiverse? Perhaps the math reveals two possible potentialities but only one gets randomly actualized. We see things go from potential to actual all the time, like when a tadpole develops into a frog. I’m not sure why a multiverse is necessary.

  70. Seth B says:

    There are infinitely many curves that fit through any set of data points. And some of our best sciences notoriously miss some data points.

    Saying that accuracy can be the only test of a theory is very bad science.

  71. Steve Sailer says:

    My vague impression is that the multiverse hypothesis started to grow in popularity after various astrophysicists pointed out that our universe seems rather fine-tuned for intelligent life to evolve. Positing a multiverse is an alternative to intelligent design creationism. On the other hand, from an Occam’s Razor point of view, assuming a multiverse might seem less parsimonious than assuming one universe and one Creator.

    Anyway, all these concepts seem rather theological.

  72. anton says:

    I am not a physicist, but a mathematician.
    To my understanding there are two things called “multiverse” which don’t have anything to do with each other.
    One comes from string theory stuff, which is an attempt at a theory of quantum gravity (among other things). The mathematics of string theory models that are supposed not to contradict observation is a terrible mess; it is not “simple” by any means. I have no issue with non-empirical speculation when in the business of building new theories; the criticism I have heard here is that this theory has been in the works since the 1960s with no endpoint to the speculative phase in sight. At some point you have to ask when an idea has failed.
    The other thing that may be referred to by “multiverse” is the “many worlds interpretation” of quantum mechanics. If I understand correctly, the mathematics of all the various interpretations is the same, so the question is one of terminology. I think it was Hilbert who once said that the theorems of Euclidean geometry are equally valid if you call the “points”, “lines” and “planes”; “chairs”, “tables” and “mugs” respectively. It is hard for me to be excited about such a labeling exercise. Of course, this is not as pointless as it looks: by a suitably suggestive choice of terminology, a better understanding of the underlying mathematics, or a different mathematical model, may be worked out. As far as I know there is no such pudding produced from these endeavours so far.

    • deciusbrutus says:

      The problem with the claim that it’s mostly a terminology issue is that the math describes observations, and “universe” and “probability” were already words before QM. It would be dishonest to intentionally use those words to refer to things that don’t exemplify the characteristics that are associated with those words.

      Likewise, the math would stay the same if we renamed quarks, and their names are pretty much arbitrary but if we named them “Strange, normal, anti, photon, Steve, dark”, some of those names would be objectively bad for communication with the humans that exist.

      • anton says:

        I do not think it is dishonest at all. In fact in my experience it is extremely pedestrian. When people have to name an abstract mathematical object they very often do not make an entirely new word, but take a name from something already existing, often not very carefully. This just means that the terms mean different things in different contexts, if necessary one can remove ambiguity saying something to the effect of “in the sense of …”. If the name is chosen properly it might even be useful to reason about (at the very least, by analogy), even if only some, and not all, of the formal properties of both objects coincide.

        • deciusbrutus says:

          I see. The Howard the Duck interpretation: “When I say a ward, it meens precisely what I intend it to: Nothing more and nothing more.”

          Note that I repurpised many, many words there, there are no errrors although there might be a few cases where I didn’t know which word I was using until my fingers found the correct keys.

  73. A1987dM says:

    FWIW, the hypothesis of neutron-mirror neutron oscillation has f***-all to do with the Everett interpretation¹, and the reasons why it’s not likely it’ll be proven or ruled out soon are practical, not theoretical, so complaining about them is kind-of like complaining about the unobservability of gravitational waves in the 1920s.

    ¹Except insofar as it involves quantum superpositions — but the same kind of superposition as proven-beyond-any-reasonable-doubt neutrino oscillations; so if your favourite interpretation says it’s impossible, so much worse for your favourite interpretation.

  74. Reasoner says:

    Whenever I think about philosophy of science I’m struck by all the parallels to machine learning. To the point where I wonder if it’s fair to describe machine learning as “applied philosophy of science”. In ML terms, Aeon’s argument could be stated as “you need cross validation”. And Scott’s reply could be stated as “not if you regularize your model enough and you know that highly regularized models on datasets like the one you’re working on tend not to overfit”.
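
    A rough sketch of that analogy in code (my construction, not Reasoner’s): the same noisy curve-fitting problem solved with a deliberately over-flexible model, with and without an L2 (ridge) penalty. The penalty plays the role of the prior/“elegance” constraint: it tames overfitting even before any cross-validation is done.

    ```python
    # Ridge regression on a deliberately over-flexible polynomial model,
    # solved stably as an augmented least-squares problem.
    # All data and constants are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    def make_data(n):
        x = rng.uniform(-1, 1, n)
        return x, np.sin(3 * x) + rng.normal(0, 0.2, n)

    x_train, y_train = make_data(30)
    x_test, y_test = make_data(200)

    def features(x, degree=15):
        return np.vander(x, degree + 1)      # polynomial feature matrix

    def ridge_fit(X, y, lam):
        # minimizes ||Xw - y||^2 + lam * ||w||^2
        A = np.vstack([X, np.sqrt(lam) * np.eye(X.shape[1])])
        b = np.concatenate([y, np.zeros(X.shape[1])])
        return np.linalg.lstsq(A, b, rcond=None)[0]

    for lam in (0.0, 1e-3, 1.0):
        w = ridge_fit(features(x_train), y_train, lam)
        test_mse = np.mean((features(x_test) @ w - y_test) ** 2)
        print(f"lambda = {lam:g}: test MSE {test_mse:.3f}")

    # With lambda = 0 the flexible model is free to chase the noise; a modest
    # penalty usually improves the held-out error, no cross-validation needed.
    ```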

    BTW, Sabine Hossenfelder has a blog post where she argues that many-worlds doesn’t actually solve the problem with the Copenhagen interpretation: http://backreaction.blogspot.com/2019/09/the-trouble-with-many-worlds.html I’m not able to follow her argument, but I will say she gives me strong “has a clue” vibes in general.

    Anyway, if we ever get prediction markets for realz, that will probably be a second scientific revolution in terms of our ability to reason and make predictions about things.

  75. antilles says:

    Scott, I realize it’s in your bailiwick, but it may have been somewhat unhelpful to choose several examples of history disciplines here (meaning only “things concerning the past” like archaeology and paleontology). Most philosophers of science tend to get a little weird about history because we are not free to conduct tests and observe results in quite the same way as in, say, physics or chemistry. Which is not to say those problems aren’t interesting and worth solving, or make those disciplines unscientific, but just that they’re kind of distracting (as attested to by the fact that so many comments focus on those aspects of the examples rather than your core point).

  76. Deej says:

    What if we’re simply missing a bit of the math and/or empirical observation, and if we knew it we’d know whether the cat was alive or dead?

    No multiverse, no last minute force. More plausible than both?

  77. sovietKaleEatYou says:

    Perhaps it’s worth writing this in a separate comment. It looks to me like the Aeon article hopelessly conflates scientific debates and pseudoscientific theories, in a way that would be forgivable for a journalist if it weren’t so sanctimonious and self-important. I don’t think it’s worthy of what is usually a pretty decent publication.

    I want to specifically single out the quantum multiverse theory, which is something that I am not an expert in but know a little about (academic here). I think it is misunderstood in popular culture and, despite Scott Aaronson’s best efforts, many of the standard misconceptions are shared by the SSC commentariat, including the other Scott. I apologize preemptively for any mistakes.

    Two postulates that need to be repeated often: the quantum multiverse theory is the default theory, and is no(t much) weirder than ordinary probability.

    Let’s unpack this. What do I mean by “the default theory”? It’s certainly not the end-all theory (in particular, it is non-relativistic). But in any current full quantum-mechanical model involving the Schroedinger equation, including the model explaining the recent Google quantum computer experiment (which outperformed classical computers by dozens of orders of magnitude on a certain well-defined computational task), the multiverse is part of the math. Namely, in order to get a probability computation for a certain event happening at a specific time, it is insufficient to know the “state of the universe” at a previous time. Rather, one needs to know simultaneously the states of all universes up to that time, including causally unrelated ones, to make an accurate prediction. As far as we know, any naive attempt to “average out” or ignore causally unrelated universes will give you strictly less accurate predictions than a model that takes them into account, and this is provably the case if you believe that quantum computers are asymptotically faster than classical ones (a standard postulate in complexity theory).

    Luckily, there is a strong exponential decay factor that tells you that the only universes that contribute in a non-negligible way are “extremely close” universes (with the “half-life” for this decay controlled by the Planck constant h-bar). This means that a classical, purely deterministic reality is a very good approximation to the quantum reality, and the Copenhagen interpretation, which just keeps track of a couple of entanglements (read: parallel universe pairs) at a time, is good to even more ridiculous precision unless you’re doing something purposefully sneaky like building a quantum computer.

    This is, once again, not an exotic theory. It is the standard model of non-relativistic quantum mechanics, with the other “theories” being simply computational techniques that give good (approximate) numbers out of this standard model. Declaring the other interpretations to be primary hurts the explanatory power of the theory, just like declaring pi to be equal to exactly 3.14 would hurt mathematics (this thing with pi, by the way, has been done historically, for example by the Greeks, because irrational numbers are weird).

    Once again, while the Schroedinger equation, cum multiverse, is the standard theory and is very elegant in its completeness and its ability to correctly predict reality, it may, and almost certainly will, be superseded by a better theory (again, it doesn’t take into account relativity, and has other theoretical problems). The full theory may well be deterministic and independent of the multiverse, or dependent on something even weirder: we just don’t know.

    Now the second postulate is that the quantum multiverse theory is not much weirder than ordinary probability. When you are mathematically defining a random process that takes one of two actions every second, your mathematical model must, from the point of view of pure logic, simultaneously postulate 2^N possibilities at once in order to talk about the evolution of N steps of this process. Of course doing this in a real-world computation is absurd: our computers have very good pseudorandom number generators, and can simulate a random process in just N steps. We can get a good idea of the probability of some event happening by doing this simulation 1000 times (rather than 2^1000 or something) and seeing how many times the event occurs in our sample. But once again, the pseudorandom process is a sort of hack to find out something about a multiverse of 2^N options.
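
    A toy version of that point in code (my own illustration, not the commenter’s): to estimate the probability of some event over N two-way random steps, you can either enumerate all 2^N histories or just sample a thousand pseudorandom ones.

    ```python
    # Exact enumeration over all 2**N branches vs. Monte Carlo sampling.
    # The event chosen here (a random walk ending at least 4 steps to the right)
    # is arbitrary; N is kept small so the exact answer is still computable.
    import itertools
    import random

    N = 16

    def event(path):
        return sum(path) >= 4

    # Exact: average over every one of the 2**N possible histories.
    exact = sum(event(p) for p in itertools.product((-1, +1), repeat=N)) / 2 ** N

    # Monte Carlo: 1000 pseudorandom histories instead of 2**N.
    random.seed(0)
    trials = 1000
    hits = sum(event([random.choice((-1, +1)) for _ in range(N)]) for _ in range(trials))

    print(f"exact: {exact:.4f}   sampled estimate: {hits / trials:.4f}")
    ```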

    Now suppose that you are an organism living inside a universe that is deterministic except that one atom moves in a random way, left with probability 1/2 and right with probability 1/2, every nanosecond. Of course, by the butterfly effect, this small randomness quickly percolates to large-scale randomness. This means that if you are an organism living in this universe, even if you have perfect knowledge of the state of the universe right now, the future suddenly becomes a multiverse: logically, you must admit the possible existence of each of the 2^N different possible future universes after N nanoseconds of randomness. All of these must figure in your ethical computations, all are inhabited by sentient beings, etc., etc. This would seem mind-blowing if humans weren’t so used to it. Even if we were to discover that the laws of physics are deterministic, from our point of view (with limited knowledge) the universe admits quite a bit of randomness (or is indistinguishable from one that does). This means that every calculation we do involving predicting the future (which is most calculations) postulates (or is equivalent to postulating) a future multiverse: many futures, all inhabited by copies of ourselves, etc.

    The many-worlds interpretation of quantum mechanics, which once again is equivalent to the theoretical interpretation of non-relativistic quantum mechanics, tells us that quantum randomness is a little bit weirder than classical randomness: not only do you care about the multiverse of possible futures, but there is also some leakage into your random process from causally unrelated past universes.

    Remarkably, when viewed from a certain perspective, this is a very small difference. Here I’m out of my depth, but it is my understanding that if you squint a bit, you can interpret the difference between quantum and classical randomness by taking a certain matrix of real positive numbers and saying “now the numbers are allowed to be complex”. This matrix controls an analogue of probability theory, so you can (again, squinting) interpret the quantum universe as basically a random universe with complex probabilities. Here, just like with ordinary randomness, actually modeling the time-N evolution from a pure-logic perspective requires 2^N steps, but systems like this can be modeled much faster on a computer, provided the computer is a quantum one. In particular, quantum computers do not let you sort nicely through all 2^N multiverses: all currently useful quantum programs amount to certain random processes with exotic probability which allow you to do a couple of cool new mathematical tricks (one of which permits factoring large numbers in polynomial time).
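
    A tiny numerical illustration of “now the numbers are allowed to be complex” (my own toy example, squinting in the same way the comment does): with two indistinguishable routes to the same detector, classical probabilities can only accumulate, while amplitudes of opposite phase can cancel entirely.

    ```python
    # Two routes to one detector: classical weights add, quantum amplitudes
    # interfere. The numbers are made up; only the contrast matters.
    import numpy as np

    # Classical: each route is taken with probability 1/2.
    p_route1 = p_route2 = 0.5
    p_classical = p_route1 + p_route2          # 1.0 -- the detector always fires

    # Quantum: each route carries an amplitude of equal magnitude, opposite phase.
    a_route1 = 1 / np.sqrt(2)
    a_route2 = -1 / np.sqrt(2)
    p_quantum = abs(a_route1 + a_route2) ** 2  # 0.0 -- the routes cancel out

    print(p_classical, p_quantum)
    ```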

    That’s the end of my rant. But I want to add a general observation, which is that humans seem compelled to conflate weirdness and philosophical depth. Our universe does not make sense from an ethical perspective. Morals are complicated, souls are un-observable, causality and ethics are hard to reconcile. But for some reason instead of accepting that our moral understanding is a highly context-dependent work in progress largely independent of its theoretical underpinnings, we pretend to have a philosophical fit every time science gives us a new piece of information that we consider weird. “The earth goes around the sun?? Oh no! But what about God?!” “Humans are descended from apes?? Oh no! But what about human exceptionalism?!” “The universe as we know it will die in some trillions of years?? Oh no! But what about the ultimate meaning?!”

    I don’t want to say this is necessarily a bad thing. To be sure, sometimes science does pose important philosophical questions. But the quantum multiverse simply is not such a case. It’s just another way in which our current best model of certain aspects of reality is a little weirder than previous models.

    • sovietKaleEatYou says:

      I should add that the Schroedinger picture is not the unique “full” theory of non-relativistic quantum mechanics: others exist (such as Feynman’s path integral formalism). They are now understood to be equivalent to the Schroedinger picture (maybe a souped-up version of it). They are also, as far as I understand, all formulated in terms of something like a multiverse of states.

    • deciusbrutus says:

      What does ‘causal’ mean in that sense?

      Because one of the things I read was that if you ignore the states of “causally unrelated universes” you cannot make precise predictions about events; I assume that necessarily means that if you made false assumptions about the states of those “causally unrelated universes”, you would be able to make precise but inaccurate predictions about the same events. I read that as implying that the state of those “causally unrelated universes” is Bayesian Evidence of the event being predicted. (That is, that P(A|B) != P(A|~B), where B is some statement about the states of universes causally unrelated to event A.)

      I feel like that is literally, explicitly Cargo Cult thinking – believing that something causally unrelated to an event impacts that event. But literal cargo cults at least have a veneer of doubt regarding whether the things actually are causally unrelated, while Many Worlds appears to use the exact terminology to describe the opposite.

      Because I don’t think modern physicists are actually that bad at epistemology, I instead conclude that they are actually that bad at communication, and that what I understood by “causally unrelated universe” is not what they meant by those words.

      If there’s some weird definition of ‘causal’ in the math somewhere, then the problem is that ‘causal’ already had a meaning and the mathematical definition is different from it.

      • sovietKaleEatYou says:

        Right! “Causality” here is in the sense of branching of universes, which is a useful fiction sort of like the notion of “frame of reference”. In GR, two particles in the same reference frame can move relative to each other absent a force, which means that there is no such thing as a reference frame in the sense of Newton, but locally it is a useful approximation.

        In quantum mechanics there is no causality: you just have a wave function on the space of all universes that also depends on time in a way that satisfies a first-order differential equation. However there is a useful fiction saying that there is a “high degree of flow” from point x (corresponding to a past universe) at time t to point y at time t’ (corresponding to a present universe), in which case you say that universe x at time t is causally related to universe y at time t’.

        Having a notion of “being at rest” (or more generally, being in a fixed reference frame) is only an approximation of the physical reality but it’s a crucial concept for organisms within the reality, so you can talk about it while understanding that it is to an extent a fiction. Same with causality.

        If you start with a random or a highly symmetric wavefunction at time 0, and at time t see with high amplitude universes with many copies of a certain configuration of atoms, it is possible that this configuration of atoms corresponds to replicating life. From its point of view, it is trying to self-replicate successfully, which involves predicting the future while making a minimal number of calculations. Thus it makes sense for it to think of itself as classical, and to ignore the fact that copies of it are leaking out into other diverging universes all the time: this would not help increase the amplitude of universes with copies of itself. Once it gets more sophisticated, it might realize that some of these failures of causality can be experimented with and exploited, and build a quantum computer. This is the behavior you would expect to see if you take an infinitely fast (classical) supercomputer, run a sufficiently complicated Schroedinger equation for some amount of time, then pick at random a world-state corresponding to high amplitude. Of course the quantum systems we can model are not sufficiently complex to evolve replicating life, but this picture of “quantum experiments are probability flows from one worldstate to another worldstate” has been borne out in 100 percent of the quantum experiments we have performed.

        • deciusbrutus says:

          Oh. I see now.

          In addition to “Causal” not being a thing in QM, “Universe” clearly isn’t a thing, and I guess “unrelated” isn’t a thing.

          I feel like it would reduce misunderstanding if they used “distant on the 6th dimension” (Or whatever dimension ‘universe’ measures) rather than “causally unrelated universe”.

          Also, if you can consistently get notable acceleration between two particles at rest without force, I can help you with the engineering required to use that to replace all fossil fuels and other energy sources.

  78. bagel says:

    Scott, I think you did a disservice to Popper.

    A third bad response: “Satan is supernatural and science is not allowed to consider supernatural explanations.” Fine then, replace Satan with an alien.

    While the Skeptics – capital-S – as a movement disallowed the aesthetically supernatural, there is a more meaningful test that applies here, and you get at it shortly after that:

    A fourth bad response: “There is no empirical test that distinguishes the Satan hypothesis from the paleontology hypothesis, therefore the Satan hypothesis is inherently unfalsifiable and therefore pseudoscientific.”

    A Popperian – naive or otherwise – would reject the Satan hypothesis on its own merits, regardless of the other theories in play. The Satan hypothesis molds itself to each new piece of data, and so is inherently unfalsifiable on its own merits, not just in comparison to dinosaurs. If you want a scientific Satan hypothesis – or scientific alien hypothesis – you need to have a theory of what Satan or the aliens cannot do.

    Or, to say it using information theory: any theory that doesn’t answer some question by discriminating between what will and won’t happen adds no information, and is therefore not meaningful.
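
    As a toy illustration of that information-theoretic reading (a sketch of my own with made-up numbers, not anything from the post): treat a “theory” as the probabilities it assigns to the possible observations, and measure how much it narrows things down relative to knowing nothing. A theory that forbids nothing carries zero bits.

        import math

        # Toy sketch: a "theory" here is just the probability it assigns to each of
        # two possible observations. The numbers are invented for illustration.
        def information_bits(theory, prior=(0.5, 0.5)):
            # KL divergence D(theory || prior) in bits: how much the theory narrows
            # down what we expect to see, relative to a maximally ignorant prior.
            return sum(p * math.log2(p / q) for p, q in zip(theory, prior) if p > 0)

        print(information_bits((0.99, 0.01)))  # a theory that discriminates: ~0.92 bits
        print(information_bits((0.5, 0.5)))    # "anything could happen": 0.0 bits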

    The fact that the forensic sciences can’t run new experiments – which is what I’d argue puts them one rung below the empirical sciences – is in this case a secondary concern. Because they’re binding themselves to falsifiable predictions, they’re still a science, and their theories deserve consideration. Those theories are more fragile than the empirically testable ones; if asked to wager your life on how surprising the next piece of evidence about nuclear physics or dinosaur ages will be, you should always pick nuclear, but they’re still not magical thinking.

    • dionisos says:

      The Satan hypothesis molds itself to each new piece of data

      The hypothesis here isn’t just “Satan”, it is “Satan + how Satan tries to mislead us” (explained in a way that ends up predicting the same things as the other hypothesis).

      This is falsifiable, and it predicts stuff. (But you can’t discriminate between the two hypotheses with experiments alone.)

      • bagel says:

        This is falsifiable, and it predicts stuff. (But you can’t discriminate between the two hypotheses with experiments alone.)

        At each frozen moment, absolutely, the claim could be specific enough for someone binding themselves only to falsifiable arguments. But the “Satan did it” argument also contains the claim that Satan is so good that we can never catch him in the act; any apparent counterexample is just Satan being tricksy. And as each argument gets falsified, it gets papered over with that defense.

        The warmth of the Earth is a great example of this moving of the goalposts; before radioactivity was understood, it was hard to imagine an age of the Earth substantially different from the Biblical account. Even Lord Kelvin infamously said that a young Earth had to be the case, using the best science of his day. So the Young Earth folks said “look, the Earth must be young, that’s predicted by the Bible and contradicted by science, so clearly the Bible is right”. But when our understanding of radioactive heating, geology, and the composition of the Earth’s core increased such that an old Earth seemed eminently possible, instead of declaring their theory defeated, the Young Earth folks just declared that Satan had messed with the heat to give the appearance of age.

        There is no piece of evidence that the “Satan did it” theory cannot do that to, which is why it’s inadmissible as a scientific theory.

        • deciusbrutus says:

          This.

          “Falsifiable” doesn’t mean “there exists some event which will happen that is strong evidence against this hypothesis” but “There is some possible observation that COULD be strong evidence against this.”

          1+1=2 is falsifiable. If I take one shoe off and put it in an empty shoebox, then take my other shoe off and put it in the same shoebox, and as a result there are three shoes in the shoebox without any other shoes being put there, that would qualify as evidence that 1+1=3, which combined with other facts about math would count as evidence against 1+1=2.

          Personally, I’d consider a lot of other hypotheses the first time I thought I added one to one and got three. That’s because I’m actually more confident that 1+1=2 than I am confident that my memory and my senses are a generally accurate summary of events that actually happened.

          It would take fewer instances of, say, seeing a faster-than-light neutrino before I doubted that special relativity generally explained and/or predicted all observations relevant to it.

          • dionisos says:

            1+1=2 is strongly unfalsifiable.

            If you do your experiment with shoes, or any experiment of this kind, you should just change your credence in the laws of physics you believed in.

            You can never disprove 1+1=2 like this, because what it means to “add” in the physical world depends on this very property.
            For example, what does it mean to have “3 shoes” in the box?
            It means you “counted” one shoe, then “another one” and then again another one. Which you could translate as 1+1+1=2+1=3.
            I mean, you could never count “3 shoes”, without using 1+1=2
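
            A way to see this point concretely (a minimal sketch in Lean 4, using a made-up Peano-style type N rather than anything from the thread): once the numerals and “add” are defined by counting successors, 1+1=2 holds by definition, which is roughly the sense in which counting already presupposes it.

                -- Minimal Peano-style naturals: a number is zero or the successor of a number.
                inductive N where
                  | zero : N
                  | succ : N → N

                -- Addition defined by peeling successors off the second argument.
                def add : N → N → N
                  | a, .zero   => a
                  | a, .succ b => .succ (add a b)

                def one : N := .succ .zero
                def two : N := .succ one

                -- "1 + 1 = 2" holds by unfolding the definitions; rfl checks it.
                example : add one one = two := rfl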

          • The original Mr. X says:

            I mean, you could never count “3 shoes”, without using 1+1=2

            Though I did once attend a stage hypnotist’s show where he made a woman believe that the number seven didn’t exist. When he asked her to count her fingers, she went straight from six to eight and totally freaked out when she got to eleven.

          • deciusbrutus says:

            No, having three shoes in the box could mean that there is one shoe in the middle and one on either side. That’s how I count three, five, and seven similar items: By finding a central object and then seeing one, two, or three similar objects on either side of the central one. (I can’t precisely count eight or more by sight alone, but after a moment I can be sure that it IS eight or more).

            I can totally count 3 objects without using S(S(S(0))).

          • The original Mr. X says:

            Laying debates about how exactly we count objects aside, no, putting two shoes in a shoebox and finding you have three shoes wouldn’t prove that 1 + 1 = 3, it would prove that there’s something weird happening causing an extra shoe to appear when you put your two shoes in the box.

          • deciusbrutus says:

            Adding one to one and getting three is the very exemplar of evidence that 1+1=3.

            I’m more confident that 1+1=2 than that any given shoebox is currently empty, even if it were one that I remember just assembling out of cardboard that has been in my hands since construction.

            Part of that is that if 1+1=3, my memory and sense data should be treated as lies of Satan, put there to confuse me. The existence of some being capable of generating false memories of mine cannot be falsified against its will, if it is Sufficiently Advanced.

          • Jaskologist says:

            Try the experiment with rabbits instead of shoes.

          • dionisos says:

            Adding one to one and getting three is the very exemplar of evidence that 1+1=3.

            If 1+1=3, then “3” just means “2”.
            And if 2=3, it only means “2” or “=” or “3” doesn’t mean what it means for the natural numbers.
            You can never have any evidence that 1+1=3, because it makes no sense.
            The most you could do is have evidence that every time you “add” something, and “add” another thing, and then “count” those things, you end up counting “3”.
            But this isn’t evidence, at all, that “1+1=3”. It only means that the “add” operation you use when you “add something” and the “add” operation you use when you “count” aren’t related in a simple way.
            (It probably also means you are living in a universe with very strange laws, but that is another question.)

            Part of that is that if 1+1=3, my memory and sense data should be treated as lies of Satan, put there to confuse me. The existence of some being capable of generating false memories of mine cannot be falsified against its will, if it is Sufficiently Advanced.

            But I think this is the same kind of error as confusing the language with the meta-language.
            If your thinking is messed up, you can think, and then think about your thinking, taking that into account.
            You can count “3” and then consider “each time I count 3, it means I counted 1+1+1”.
            But if you remove the language/meta-language difference, you are just changing the definition of things, not saying something substantial.

          • deciusbrutus says:

            If the universe is such that adding and counting use different ordinals, then “1+1=2” is certainly not true.

            In such a universe, it might be not even false.

            I don’t know if I would consider a statement that is not even false to be falsifiable, but it’s certainly not in the category of “things which are so fundamentally true that it’s hard to imagine what evidence against them would look like”.

          • Paul Brinkley says:

            If 1+1=3, then “3” just means “2”.

            That’s a big problem, though. If you infer that 3 = 2, it then means 1+1 = 1+1+1, and hence 0 = 1, which eventually implies all integers are equal to each other, and all numbers as well.

            Which I guess is great news for people wanting to solve economic inequality, up until they run into people insisting that they should be able to trade their pocket lint for a fleet of yachts. It’s also great news for people trying for an easy engineering degree, so long as they’re later okay with bridges collapsing – and maybe they will be, because they’ve equated collapsing with staying up, a hundred dead people with a hundred live people, and so on.

            In other words, just assuming 3 = 2 throws all your existing math out of whack. Nevertheless, if you add 1 and 1 and get 3, without being able to explain it as a human error, you would like to know that, since it implies something important about your pocket lint, not to mention your yacht business, your daily commute, and your moral position on death and suffering.

            1+1=2 is indeed falsifiable. There is an underlying claim that + and = are not just being tricksy, and that they will never give you 1+1 = 3, let alone that they’ll do that because you didn’t tithe enough money last Tuesday. The theory is that if they ever do do this, then there’s some serious problem with the way math works at that level and we’re going to want to address it, and unless that day comes, we can still rely on 1+1 not suddenly equalling 3 or 100 or a Canadian waffle.

            Or in general, as Faza says, falsifiable claims are useful because they allow you to classify events as “impossible, provided these claims are true”. This is especially useful if your hindbrain stubbornly insists that those events are undesirable.

          • dionisos says:

            Or in general, as Faza says, falsifiable claims are useful because they allow you to classify events as “impossible, provided these claims are true”.

            But this only works for claims about the world, and in particular for empirical claims.

            “1+1=2” doesn’t say anything about the world; it is only a step in a particular kind of deduction. It is compatible with all possible worlds.
            And this is exactly why it is unfalsifiable.

            The “+” and the “=” have completely abstract meanings. If some “operation” we intuitively think of as a “1+1=2” (like adding 2 shoes and then counting 2 shoes in the same box) doesn’t correctly map to our experiences, it only means our mapping is wrong (for the “1”, “2”, “+”, or the “=”).

            In fact we are thinking of “adding shoes in a box” as an addition exactly because we believe it to follow a structure similar to the addition of natural numbers. (And this could be wrong, and is falsifiable in very bizarre worlds, but 1+1=2 is not.)

          • kaathewise says:

            @dionisos

            “1+1=2” doesn’t say anything about the world; it is only a step in a particular kind of deduction. It is compatible with all possible worlds.
            And this is exactly why it is unfalsifiable.

            I get what you are saying, but although arithmetic is not tied to the real world, it itself represents a theory about the virtual world governed by logic, and as such it is falsifiable.

            In particular, there might exist a proof that 1+1=3, or that arithmetic (or some formal system containing it) is inconsistent, i.e., it is possible to deduce 2 incompatible statements from it.

            We do hope that it is consistent, but as Gödel’s Second Incompleteness Theorem shows, arithmetic cannot prove its own consistency, and thus it is theoretically possible that there exists an example proof that shows that arithmetic (or some formal system containing it) is inconsistent, and hence false.

  79. DocKaon says:

    I really wish people would at least understand that the Many Worlds Interpretation of Quantum Mechanics has nothing to do with the multiverse theories that people are criticizing. They’re completely orthogonal issues. The Many Worlds Interpretation of Quantum Mechanics is about how you relate the mathematical apparatus of quantum theory to a description of reality. Until the different interpretations are able to actually produce testable predictions different from each other, it’s a philosophical problem completely divorced from any empirical resolution. This is not what people are complaining about.

    The multiverse theories that people are criticizing are cosmological multiverses. They deal with different causally separated regions of space-time. These multiverse theories are of interest to many theorists, because they deal with the basic issue with the development of string theory, which is that it has a vast number of potential solutions with no way to choose between them. You could in principle have string theory solutions which include all the physics we currently know plus additional physics, but that additional physics covers such a wide range of possibilities that it would be consistent with almost any new experimental result. It also contains a vast range of possibilities which are completely inconsistent with the universe we observe. Multiverse theories come to the rescue by saying you have all possible solutions somewhere in the multiverse, and anthropic reasoning explains why we live in a universe compatible with our existence. What’s left is a theory that is so flexible that it makes no predictions that could falsify the theory.

    In my opinion, this is a completely degenerate research program which should be abandoned by physicists. Unfortunately, what instead seems to be happening is that theorists who have dedicated their careers to this research program are trying to twist the definition of science to make it acceptable. They are able to push this because their opposition doesn’t have a successful alternative theory. All they have is that what the string theorists have been doing hasn’t worked.

    • sclmlw says:

      So instead of redefining science as something that does not require hypothesis testing, let’s call things like this what they are: philosophy. Sure, science is a branch of philosophy, but not the only branch and not the only useful branch. It’s just the branch we apply when reasoning in a certain way about observations in the natural world. Maybe we could all calm down for a second and admit that an idea we haven’t yet figured out a way to test isn’t useless, even if it isn’t science. People can continue working on it, and indeed teams of PhDs can work their entire lives in an effort to bring it to the point where it becomes a testable and scientifically valid theory. Until then it should continue to be considered philosophy.

  80. HMSWaffles says:

    “Satan makes the dinosaur fossils to fool snooty people with research degrees” is a non-systemic attempt to “explain” *why* there might be fossils, not *how* they all came to be where they are, in the shape that they are in, and what they are made of.

  81. 1 says:

    One way to make the fuzzy notion of elegance/simplicity rigorous is to use Kolmogorov complexity and algorithmic randomness. Roughly, the prior probability of a hypothesis is inversely proportional to its complexity. So in your paleontology/Satan example the Satan hypothesis gets a lower prior, as it’s more complex than the paleontology hypothesis. Since the empirical content of both is exactly the same, the paleontology hypothesis gets higher (Bayesian) probability.

    • Jaskologist says:

      You say this is the rigorous way to determine simplicity, but you didn’t actually use it. All you did was declare that one was more Kolmogorov complex than the other. Prove it.

      What’s the actual Kolmogorov complexity of the Satan hypothesis, and what’s the complexity of the old earth hypothesis?

      (Good luck.)

      • 1 says:

        I’m well aware that the Kolmogorov complexity is uncomputable. However, that doesn’t mean it’s impossible to reason about. Here the Satan hypothesis necessarily includes the paleontology hypothesis, so any program for the Satan hypothesis has to be longer than the program for the paleontology hypothesis. So the Satan hypothesis will have higher Kolmogorov complexity.

        You seem to be suffering from the misconception that uncomputable means useless. But we can still reason about it, e.g. we can derive upper bounds which are computable.
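
        For instance (a rough sketch of my own, not a rigorous calculation): the length of any lossless compression of a description is a computable upper bound on its Kolmogorov complexity, up to the constant-sized decompressor. That doesn’t by itself settle which hypothesis is simpler, but it shows that “uncomputable” still leaves room to reason.

            import random
            import zlib

            # Crude computable upper bound on K(s): the length of a zlib-compressed
            # encoding of s (plus the fixed size of the decompressor, ignored here).
            def k_upper_bound(s: str) -> int:
                return len(zlib.compress(s.encode()))

            random.seed(0)
            structured = "dinosaur " * 1000                                    # highly regular
            noise = "".join(random.choice("abcdefghij") for _ in range(9000))  # no short pattern

            print(k_upper_bound(structured))  # small: the regularity compresses away
            print(k_upper_bound(noise))       # much larger: no short description is found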

        • Jaskologist says:

          Back in the old days of Star Trek: The Next Generation, the script writers used to put “(TECH)” into the scripts, to indicate that some sort of technobabble was to be inserted there for the characters to say. Very often, this was the resolution of the whole episode’s plotline. The details, it turned out, weren’t actually important; the conclusion had already been determined.

          I’ve read the sequence where Yudkowsky claimed that we could prove which thing was simpler by using K-complexity. But I noticed that he didn’t actually then proceed to use K-complexity. He simply declared that his preferred hypothesis was obviously the less K-complex one, because he was super-sure it was simpler. K-complexity is just technobabble to make an already-assumed answer sound more impressive.

          I have seen his followers reference K-complexity many times. I have never seen them actually calculate it. It’s pretty curious to have a metric and then never measure by it.

          So shut up and multiply. I am not at all convinced that the program to simulate a guy burying some rocks is more complex than the program to simulate the lives and deaths of massive populations of diverse species and the complex geological processes involved in (sometimes!) very slowly replacing the organic bits with minerals.

          I was promised a rigorous method. So let’s see the rigor.

          • deciusbrutus says:

            A planet of age 0 (that is to say, a Big Bang) loses no complexity over time.

            Any universe which appears to be the result of a Big Bang can be no less complex than the one which simply is.

            It is possible to reason in supernatural elements which CAN make the universe look like or unlike one that arose from physical forces, but if they can make the universe look like it arose from physical forces they must be at least as complex as those physical forces; and if they can choose to make the universe look otherwise then they must be more complex.

            Physical determinism is *simpler* than anything that can choose to be indistinguishable from physical determinism.

          • dionisos says:

            I think you are missing two important things:
            – The Kolmogorov complexity doesn’t give us a precise way to measure complexity, but it gives us a precise meaning for what we want/search/speak about.
            This is true of a lot of concepts: truth, probability, credence, rationality, …
            We have no general way to measure most of them, but it is still important to give them a precise meaning.

            – Even if the Kolmogorov complexity is incalculable in general, you can still consider the complexity of some particular data, and you can still reason about it, and you can still have heuristics and think in terms of probability.

            For example, I think it is very probable that K(pi)+K(sqrt(2)) > K(pi).
            And it is the same thing with Satan: you would need a program for how all the other stuff works + for how Satan works, which is probably a bigger program than just the other stuff without Satan.

          • Jaskologist says:

            And it is the same thing with Satan: you would need a program for how all the other stuff works + for how Satan works, which is probably a bigger program than just the other stuff without Satan.

            Both of our programs already contain billions of intelligent agents capable of creating sculptures and using shovels. Yours adds a lot of stuff about the mating habits of trilobites. Clearly that’s the bigger program.

            Do you see why this is not at all a rigorous method? We’re just playing word games to try to define the hypothesis we don’t like as “bigger.” There’s no proof or real math behind any of it.

            Physical determinism is *simpler* than anything that can choose to be indistinguishable from physical determinism.

            This is a matter of faith but not yet demonstrated, and certainly not relevant to the current question. We do not have a universal equation that we can run to predict the locations of T. Rex skulls.

            To repeat, none of this is rigorous. If I said that I had a rigorous proof that P != NP, and the proof was, “I’m pretty sure it’s harder to decrypt stuff than to encrypt it QED,” I would be laughed right out. This is the same thing. We’re just cloaking our intuitions in fancier language.

        • TheAncientGeeksTAG says:

          But the other problem is that this kind of argument always assumes that the alien/witch/Satan is some sort of complex entity with working parts. It therefore fails to refute *actually* supernatural hypotheses, because supernaturalism posits that there are (possibly that there only are) entities that are inherently intelligent etc.

  82. eyeballfrog says:

    Alternatively, you could just believe in Bohmian mechanics, where the entire universe is deterministic and there’s no more measurement problem.

  83. Ransom says:

    I am surprised that no one has posted this link yet (or maybe I just missed that thread?):
    https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics
    There are many, many interpretations of QM. The many worlds one is not among the most popular, as I understand it. Copenhagen is very popular, because of its simplicity, but yes, it seems a bit ad hoc. The idea that the collapse postulate is a result of destructive quantum interference whenever a measurement is made is also popular. (There seem to be several versions of this idea.) It has the advantage that it seems more “closed”, less ad hoc. But they all lead to the same results for any actual quantum calculation.

  84. nobody.really says:

    A third bad response: “Satan is supernatural and science is not allowed to consider supernatural explanations.” Fine then, replace Satan with an alien.

    Great–another job lost to an alien….

  85. 4thwaywastrel says:

    To me predictability and control seem like a bigger factor than simplicity. Just about any theory can describe and explain, but being able to make predictions and ultimately control a process gives a theory a lot of weight.

    Seems like the sphere example would be a “weighty” theory, as it would be able to predict properties of as-yet-unknown elements.

  86. Markk says:

    Consider a classic example: modern paleontology does a good job at predicting dinosaur fossils. But the creationist explanation – Satan buried fake dinosaur fossils to mislead us – also predicts the same fossils

    This is *not* the creationist position. The creationist position is that dinosaurs survived Noah’s Flood and went extinct shortly afterwards.

    I’m not a creationist and I probably shouldn’t care, but it annoys me that Scott, rather than use the actual creationist position, instead went “look at the stupid outgroup”.

    • Jaskologist says:

      Does this mean we get to report the original post for failing the “true, necessary, kind” test?

      • Markk says:

        I don’t know.

        What’s interesting is that creationists are aware of the issues raised in this post. So rather than (just) say that the evidence supports their views, they’ll say that the evidence in both cases is the same – it is the presuppositions and interpretations of the evidence that differ. From there they attempt to re-conceptualise the science.

    • deciusbrutus says:

      It’s not *the* creationist position. It’s certainly a position that actual people seriously propose, and those people are creationists.

      • Markk says:

        It’s not a position taken by the major creationist organisations, such as Answers in Genesis or Creation Ministries International.

  87. deciusbrutus says:

    The problem with Many Worlds is that the summary doesn’t match the math.

    I know this only because people who understand the math are constantly telling people who are drawing inferences based on the summary “No, it doesn’t work like that at all.”

    In rocket engineering, the summary does match the math. I know this mostly because when people who understand the math interact with people who draw inferences based on the summary, they typically say things like “That would cost too much” or “We don’t have a material quite strong enough for that, but we’re working on it” or “Your narrative timing ignores the speed and capabilities of actual spacecraft”.

    To compare, if psychiatry were like rocket science, you would see science fiction say things like “I caught depression, but then I visited a psychiatrist and sat on a special couch and talked and was cured.”

    If psychiatry were like Many Worlds, you would see science fiction say things like “Depression was always fatal before the invention of the psychiatric couch, but we have no way of knowing how many deaths it caused because it was also impossible to diagnose without the upholstery.”

  88. Carl Pham says:

    Well argued, but it reminds me of a lot of philosophy essays in that it argues the preliminaries carefully and completely, then kind of gives up when it gets to the main event — if indeed the main event has anything to do with multiverse hypotheses (which it may not, I may have read that wrong, or projected where my ultimate interest would lie).

    The difficulty is that “simplicity” (or equivalently “elegance”) is very much in the eye of the beholder, and “familiarity” can readily be mistaken for it. A crude example: for assorted reasons, I have never become familiar with using Microsoft Word. In several decades of professional work, I have managed to entirely avoid its use (for published stuff I’ve always used LaTeX or one of its variants). People assure me very sincerely that it is a “natural” and “easy” application to use — much more so than LaTeX, of course — and that everything is “intuitive” and “logical.” And yet every time I sit down to even try to use it, I find it baffling, illogical, and opaque. I’m sure in time I could become as familiar with it as anyone else, and then I, too, might think it simple and logical.

    But I’d be wrong. Surely if something is genuinely simple, logical, and intuitive, that should be most apparent to the complete amateur, the person who has never seen it before. That things appear simple and logical only after you have been thoroughly familiar with them is just the human “System 1” tendency (to borrow Kahneman’s terminology) to replace difficult questions (“Is this elegant — universally simple?”) with superficially similar easy ones (“Is this familiar — does it seem simple to me?”) and conflate their answers.

    So how are we to judge whether the multiverse, say, or indeed any other untestable hypothesis, is more or less “elegant” than some other hypothesis — and be sure we are not confusing familiarity with elegance? (Just for example, I personally find the multiverse solution of the quantum measurement problem to be laughably unlikely, a Rococo assemblage of crystal spheres that would leave cartoon Ptolemy struck dumb in admiration. Almost any other solution to the problem, including simply shrugging and saying clearly our understanding of quantum mechanics is incomplete, would be simpler and, to my mind, far more elegant.)

    This is the heart of the issue. How are we to distinguish elegance from mere familiarity? It’s not just a philosophical question, either. There are a host of advances in science that have only occurred when some breakthrough spirit was able to take a step back from what was familiar, and consider the unfamiliar and strange as a potential explanation. Once it works, then of course everyone says “oh that was so elegant” — but arguably it was the immediate pre-conscious dismissal of the unfamiliar as inelegant that prevented the discovery from being made by someone else. Examples abound: Einstein being willing to reconsider the concept of absolute universal time, leading to special relativity, and of absolute space, leading to general. Dirac being willing to reconsider the idea that energy could not be negative, leading to antiparticles. Lavoisier being willing to reconsider the idea that air was an element, leading to the demise of phlogiston, the discovery of oxygen, and the correct theory of combustion. Thomson being willing to reconsider the possibility that heat was not a substance at all, but the excitement of invisible degrees of freedom, which led to the demise of caloric and a sound basis for thermodynamics. Surely there are examples in other fields — medicine, psychology, psychiatry.

    Indeed, I would argue that the tendency to confuse familiarity with the simplicity of truth is so powerful in the human psyche that the judgment of simplicity or elegance is a priori not to be trusted, when made by someone of our species. We are simply far too apt to fool ourselves about that. So this judgment, while in principle spot on as a primary way of assessing competing hypotheses in the absence of empirical testability, is one that in practice we are almost all — except for the Einsteins and Diracs among us — incapable of making, at least prospectively, and even in hindsight only when consensus adds social pressure to nascent familiarity.

    • Viliam says:

      There is a “map and territory” distinction: whether something is simpler per se (this is what people are pointing at when they talk about a computer simulating the thing), or whether for practical purposes some method is simpler to use in situations where its outcomes are almost identical to the theoretically perfect outcomes.

      For example, when objects move with small speeds, using Newtonian physics is simpler than using relativity. (And when objects move with large speeds, we have to use relativity, because we don’t have a choice here.) However, a universe with relativistic physics is simpler than a universe with both Newtonian and relativistic physics.

      Similarly, when talking about large objects, using classical physics is simpler than using quantum physics. But a universe with quantum physics is simpler than a universe with both classical physics and quantum physics.

      I believe this argument does not depend on familiarity. It’s just saying that “X” is simpler than “X and Y”. Regardless of whether “X” or “Y” is considered more simple than the other.

      The argument is not “from certain point of view, relativity feels simpler to me, and you should respect my point of view”. Instead, the argument is “if you want to describe the whole universe including the fast-moving things, you gotta include the relativity anyway, and it is simpler to just stop there, instead of also adding Newtonian physics as yet another set of laws, and arguing vaguely about which set of laws applies where”.

      This does not object against using Newtonian physics as a convenient approximation, useful whenever things move slowly and we don’t care about too many decimal places. It just says “it is merely a very useful approximation, but in fact, things move according to relativity even when they are slow; even when the difference is not measurable in practice”.

      Similarly, if you want to describe the whole universe, including subatomic particles, you gotta include quantum physics anyway, and it is simpler to just stop there, instead of also adding classical physics as yet another set of laws, and arguing vaguely about which set of laws applies where.

      This does not object against using classical physics as a convenient approximation, useful whenever things are big, we don’t care about too many decimal places, and we don’t do weird things such as build quantum computers. But classical physics is merely a useful approximation; the universe is actually built on quantum physics at all levels.

      …if you accept this, then “by the way, there are other universes” falls into the same category as “by the way, the twin that stays on Earth will be older than their space-faring sibling” or “by the way, there are zillions of stars, many of them also with planets” or “by the way, the Earth goes around Sun”. A weird fact, but a logical conclusion of a simple theory. We do not artificially complicate the theory (by postulating a quantum collapse, epicycles upon epicycles, or Satan hiding dinosaur bones) in order to protect ourselves from the feeling of weirdness.

      EDIT:

      If I am very uncharitable, the Copenhagen interpretation could be described as: “Yeah, nature is quantum, but only inside the laboratory; or more precisely only inside the experimental equipment.”

      • Carl Pham says:

        Sure, I agree a universe with just quantum physics is simpler than one with a whole other physics grafted onto it, and if quantum could explain everything, we’d be good. But it manifestly can’t. Relativity is a classical theory, and unless you want to argue relativity is unnecessary, you are stuck with a description of the universe which is part quantum, part classical, with no known way of joining the two in the case of spacetime itself (i.e. the problem of gravity).

        Also, I don’t agree for a moment that accepting quantum mechanics implies acceptance of the many worlds interpretation. That’s just something grafted on top of the math, which is plain enough. It’s to satisfy our urge to answer the question “what’s REALLY happening” with something that is familiar seeming to our classical macroscopic experience.

        • sovietKaleEatYou says:

          In nonrelativistic situations quantum does agree with experiment to a ridiculously good approximation (and there are situations where it can be successfully combined with relativity to some extent). Historically, this has meant that a “unifying” theory should specialize to both quantum and to relativity in different limits.

  89. mika says:

    I wish every non-expert (and even some of the supposed experts) talking about QM MWI would remember that

    It should be emphasized, though, that this is just bookkeeping, not a real separation between “copies of the universe,” or even copies of the system of interest. There’s only one universe, in an indescribably complex superposition, and we’re choosing to carve out a tiny piece of it, and describe it in a simplified way.

    Many Worlds, But Too Much Metaphor

    From the above article (emphasis mine)

    Rather than “Many-Worlds Interpretation,” I’d go with “Metaphorical Worlds Interpretation,” to reflect the fact that all the different ways of cutting up the wavefunction into sub-parts are fundamentally a matter of convenience, a choice to talk about pieces of the wavefunction as if they were separate, because the whole is too vast to comprehend.

    • Gerry Quinn says:

      But the superposition interferes destructively (decoherence) except in a subset that corresponds to a branching set of classical universes.

  90. mika says:

    Speaking of the regular cosmological sort of a multiverse, Scott quotes Sean Carroll, so I’ll raise one Peter Woit.

  91. Wouter says:

    Don’t apologise, that sphere formula example you gave was brilliant, and EXACTLY how some of the most profound breakthroughs in physics came to be.

    I am having fun in my mind coming up with example after example, but I’m on my phone so not gonna do it unless somebody asks 🙂

  92. RC-cola-and-a-moon-pie says:

    This is probably late in the day (deep in the discussion column) to hope for a reply, but after reading all the very interesting comments in this discussion, there are two points that have come up several times that I didn’t notice answers to, and about which I would love to hear from proponents of the many worlds interpretation:

    1. That the inability so far to derive the Born rule within the MW framework eliminates the perceived conceptual advantage and simply re-inserts the same ad hoc character that is perceived in Copenhagen. Is this false? [Edit: Maybe it isn’t the Born rule per se that is the issue but rather more general problems with probabilistic concepts.]

    2. That Bohmian mechanics presents a third interpretation (I understand there are others as well) that obviates the ontological complexities of MW and at least deserves careful consideration before leaping to a fairly extravagant-seeming understanding of how to answer the measurement-anomaly problem.

    • name99 says:

      Bohm is a waste of time, a red herring, because it doesn’t in any useful way extend to quantum field theory.

      The Born rule is a lot more natural if you start off from a different set of postulates from standard QM. Of course you land up at the same place, but if your starting point is not Hilbert spaces but measure theory, and how you would naturally extend that to measures over vector spaces, then something like Born is a reasonable part of the mathematical structure.

  93. VampricallyValidated says:

    A third bad response: “Satan is supernatural and science is not allowed to consider supernatural explanations.” Fine then, replace Satan with an alien.

    Aliens planting the fossils would be just as unscientific, given our state of evidence for any actually existing, and dismissing the supernatural distinction like this doesn’t feel very convincing. While we can say it’s just a higher degree of pseudoscientific reasoning, there’s also the saying that a quantitative difference of sufficient magnitude is a qualitative difference.
    Something that requires our model of nature itself to be modified radically so that the proposed theory could even be meaningfully expressed in it is qualitatively different from one that merely requires expanding our existing model without adequate evidence.

    Note how classic scifi works aim to achieve suspension of disbelief by selectively yet elaborately addressing how and why their premise is supposed to make sense, while high fantasy works usually steer away from it.

    A single alien planting the fossils with his own powers would be supernatural, a single alien equipped with advanced technology slightly less so, and a group of aliens from a technologically advanced civilization planting the fossils (on an alien CFAR team-building retreat for AlienCo?) is an SMBC comic waiting to be made – but that’s because our notion of aliens here is just weird-looking humanoids with more science, so we mentally gloss over the fact that we have zero priors for there actually existing any, and extrapolate from our own society to counterfactually reason about them – but what is our reference model for reasoning about Satan?

    ” I think this is a stupid distinction – if demons really did interfere in earthly affairs, then we could investigate their actions using the same methods we use to investigate every other process.”

    Well no, we really couldn’t, at least not with the powers and motivations traditionally attributed to demons – we would have no notions of causality and hence no reproducibility or inference beyond cogito ergo sum.

  94. StellaAthena says:

    I think this is a very reasonable observation. Philosophers sometimes refer to this as scientism, the dogmatic belief that science is the only mechanism by which one can learn about the universe. This attitude has become increasingly popular over the past couple of years and is widespread in the US today. Paul Feyerabend is a prominent 20th-century critic of the special prestige of science in our epistemology.

    One place you can see this very clearly is in discussions of transgender people. Different sides are quick to argue that science says that they are right and a not insignificant amount of the debate is over whose science is done better. You also see this in political and economic debates sometimes, where people use data and science to wage a proxy war over what is really differing foundational philosophical beliefs.

    By contrast, in the past people had many independent sources of knowledge about the world and reasoned about which were most useful in different contexts. You can see reflections of this in several Platonic dialogues and the Symposium. You can also see it in Judah Halevi’s Kuzari, which tells the story of how the king of the Khazars decided between Christianity, Islam, Judaism, and philosophy as systems of knowledge. Roughly speaking, the book presents three separate sources of knowledge: divine revelation, received tradition, and rational thought. Different belief systems put different weights on these sources of information, but all are considered as having a mix of all three. The Kuzari explicitly argues that some questions, such as the eternity of the universe, fall beyond the realm of reason and therefore must be taken up with other sources of information such as prophetic revelation. This podcast episode presents an accessible gloss of this work.

    One interesting facet of this phenomenon is attitudes towards mathematics, computer science, and statistics. These fields are (mostly) non-empirical but are clearly important to the practice of science. When I participate in lay conversations about epistemology (I say this as someone who has a degree in philosophy and leverages philosophy in my work), this often causes consternation for people. Personally, I would rather group those fields (along with philosophy and possibly a couple others) as their own type of knowledge investigation, one characterized by a priori reasoning and the primacy of logical argumentation and proof as the methodology for arriving at conclusions.

    When trying to classify fields of knowledge, I think methodology is a major thing to focus on. One major reason for this is that it’s clearly the methodology of “science” that justifies science’s major or exclusive role in reasoning. I am a mathematician and a philosopher, not a scientist, but it seems to me that fields commonly labeled as “science” are not necessarily methodologically connected. Indeed, even “a single field of science” can have areas where different methodological approaches reign. Economics (micro vs macro) seems to be a great example of this, as does political science (theory building vs computational). Contrastively I don’t think the subject matter is very important at all, as evidenced by my categorization of philosophy and mathematics together.

    A surprising place where you see people grappling with this issue is the contemporary social justice movement. “Lived experience” is given a special epistemological status not granted to any mere anecdote, and people who have experience on the receiving end of systems of oppression are understood to have a more important understanding of the phenomena at hand. How distinctive this is depends on how exactly it’s described, but my experience in activist spaces and speaking to activists has given me the strong impression that it’s wrong to just dismiss this as an overemphasis on anecdotal evidence or in-group prestige. It really does seem to function to me like a different (and possibly new) form of epistemic reasoning.

  95. Felicidad Unir says:

    Epistemically, as ever, we are all slapdash coherentists posturing as conscientious foundationalists.

  96. carsonmcneil says:

    (I am tempted to say “simpler”, but that might imply I have a rigorous mathematical definition of the form of simplicity involved, which I don’t).

    Oh, here you go, it’s called VC-dimension, and it is proven that between two models with equal training set performance, the guaranteed bound on generalization error is lower for the model with lower VC dimension. It is pretty easy to see that “god-based” theories have very high VC dimension, because a god who can do anything can fit any possible set of observations – the god hypothesis space shatters data sets of any size.

    https://mostafa-samir.github.io/ml-theory-pt2/
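
    For reference, one standard form of the bound being appealed to (Vapnik’s; a sketch of the math, not something from the linked page): for a hypothesis h drawn from a class of VC dimension d, with n samples and probability at least 1 − δ,

        R(h) \;\le\; R_{\mathrm{emp}}(h) \;+\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}

    where R is the true risk and R_emp the training error, so a lower d tightens the guarantee for the same training performance.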

  97. Freddie deBoer says:

    I will say this: the odds that the multiverse theory will one day be proven laughably wrong are vastly, vastly higher than the same odds for, say, the claims of someone using a telescope to study the trajectory of a comet. And that strikes me as important.

  98. AlexanderTheGrand says:

    It’s an interesting point that, when all the evidence is already known, there’s no such thing as a “predictive” theory. But statistics/machine learning has a partial answer to this question.

    You can divide your evidence into “training” and “testing” sets. Given a meta line of reasoning, would you have been able to come up with the specifics of the theory to predict the “testing” evidence from just the training evidence?

    For example, take the spherical example in this essay. Imagine there are really 40 particles of this form. If you saw 20 particles that followed this pattern, it’s reasonable to think you could have come up with the “right” formula, which would successfully predict the remaining 20 particle masses.

    It doesn’t work for all of these, but it’s helpful in certain cases.
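
    A toy version of that 40-particle example (entirely made-up numbers and a hypothetical m = c·r³ law, just to show the mechanics of the split): fit the one free parameter on half of the particles and check the prediction on the held-out half.

        import numpy as np

        # Hypothetical data: 40 "particles" whose masses follow m = c * r^3 plus noise.
        rng = np.random.default_rng(0)
        radii = np.linspace(1.0, 5.0, 40)
        masses = 4.19 * radii**3 + rng.normal(0.0, 1.0, 40)

        # Split the evidence: "see" every other particle, hold out the rest.
        r_train, r_test = radii[::2], radii[1::2]
        m_train, m_test = masses[::2], masses[1::2]

        # Least-squares fit of the single coefficient c in m = c * r^3, training half only.
        c_hat = np.sum(m_train * r_train**3) / np.sum(r_train**6)

        # How well does the formula "discovered" on the training half predict the rest?
        pred = c_hat * r_test**3
        print("fitted c:", c_hat)
        print("mean absolute error on held-out particles:", np.mean(np.abs(pred - m_test)))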

    • carsonmcneil says:

      Yes there is such a thing as a more predictive theory in these cases! The same field has mathematically rigorous statements you can make about generalization error. I think that even if we have all the evidence we are ever going to get, it is still meaningful to say “WERE we to get more data from the same generating distribution, this model is more likely to predict it well”. No, you can’t actually sample and test the distribution, but there are still statements you can make about it, and preferring simpler models is one implication of thinking about that.

  99. Mark F says:

    I think what is going on here is really a disagreement about which explanation is the most parsimonious. Jim Baggott, the author of the Aeon piece, is accusing many-worlds proponents of making up stuff without evidence – precisely of “multiplying entities without necessity.” But the entire motivation of MWI is that wave-function collapse is an unnecessary addition to the theory. There are reasonable arguments on both sides of whether wave-function collapse is a bigger wart on a theory than the postulation of parallel universes. Neither is tantamount to Satan trying to trick us, and both of them get uncomfortably into the realm of philosophy. I think most physicists understand that this issue is not fully understood. I think many physicists don’t assume that wave-function collapse is a thing that happens in the world. Some people call this the “shut up and calculate” interpretation. The idea is that we have a mechanism for predicting the results of experiments, and we admit that we don’t understand some parts of it. I don’t see this as crazy, and I don’t see MWI as crazy either.

    But I’m not sure Baggott himself realizes the nature of the debate he’s engaged in, since he formulates his criticism around the idea that many-worlds proponents have a completely novel idea of how science is supposed to work. I wonder if he’s been reading the blog “Not Even Wrong”, which takes similar positions but whose author actually understands the physics.

  100. name99 says:

    Let’s be honest here. This is not REALLY about “how to do science”, it is about politics (as most things are).
    Specifically the problem is that the real scientists say “I can’t define a good theory, but I know it when I see it”; and then various twits claim “well I can see that my theory of time vortices is a good theory”. What do you do?

    Option a is to admit that some people are real scientists, others are not, and that’s that. But that runs into multiple issues, from the dumb (“who are you to say that not everyone can be a scientist?”) to the marginally legitimate (credentialing, social pressures on scientists, …); with the basic POLITICAL problem that Western society is ever less willing to accept that in fact some people do just have better taste (not only more education, not only more credentials) than most people when it comes to theories.

    Option b is to say we don’t have to worry about all the problems inherent in a because we can create a SYSTEM, the SCIENTIFIC METHOD that does the choosing for us. That is why Aeon is pushing this nonsense, as are so many others. Doesn’t matter whether or not it works, what matters is that it appears to solve the problem (enough so that you can then deny there is a problem…)

    As such this is just the latest version of a long tradition. The replacement of virtue ethics by deontological ethics was an early step. Compulsory sentencing laws are a much more recent manifestation. Always the same thing: “if we allow for human discretion, well maybe the humans will be imperfect? we need a SYSTEM”

    Is this progress? I don’t know about for law or ethics, but for physics I think it would be a catastrophe if it were actually implemented. We do actually need a few people with physics taste setting the agenda and deciding what counts as a good vs not good theory. And I say that even as (one version of) the multiverse/string theory/supersymmetry theory seems to have headed off a cliff quite a few years ago. Longer term (give it another two generations or so) I have more confidence in good taste realizing this was a bizarre detour in physics history than I do in “Bayesian” physics uncovering further real understanding.

  101. pontifex says:

    I think Karl Popper was on to something important when he talked about what is and isn’t science. Of course, just like with everything related to philosophy, we quickly bog down in the ever-shifting quicksand of word definitions. If my theory about dinosaur bones matches up with some other things we know about dinosaurs, does that mean it has been “tested”? Or do I actually need to see a dinosaur with my own eyes to count as testing it?

    Clearly, if you think it’s the second one, big parts of science become “untestable”… at a minimum, anything dealing with the past, and anything dealing with cosmic distances. But I don’t think that’s how most scientists would look at “testability.” For example, people routinely talk about Einstein’s ideas about black holes being “validated” by later work even though nobody ever visited a black hole in person.

    And of course, you can have “degenerate” theories that explain everything without really providing any insight. The world was created one second ago as a test of how someone like you would feel about a webpage like this one. An evil demon controls everything you see to mislead you. And so on. A proposition can be logically well-formed and mathematically un-disprovable, but still useless for understanding the world. Ultimately, we are trying to build a consistent world-view, and these propositions don’t help us (to put it mildly.) In fact, they actively hinder us. So while they’re fun to think about a few times in philosophy class, they aren’t science… and hopefully aren’t part of your daily thought patterns, because that would be a bad sign for your mental health.

    Honestly, I feel that the “many worlds” interpretation of quantum mechanics is not yet science. Maybe one day it will be science, if we find a way to interact with a parallel world somehow. It wouldn’t be the first concept to make this leap. Philosophers used to opine about whether matter was continuous or discrete, without any real way to validate either position. That changed eventually, of course.

    By the way, I think you are pulling a motte and bailey with the paleontology comparison. I don’t think fundamentalist Christians would argue the position “an evil demon created everything to make paleontology look true, even though God created the Earth 6,000 years ago.” Rather, they try to find arguments based in concrete evidence for why the Earth is 6,000 years old. People intuitively recognize that the “a demon is manipulating your perceptions” argument is not actually strong in practice, even though it’s philosophically un-disprovable. They look for evidence.

    (To be clear, I’m not saying they never make the “evil demon” argument, just that it’s the motte. Not what they actually believe, just something that clearly can’t be disproved, by its very nature. They then quickly go back to the bailey: “look at all of these artifacts proving that Noah’s flood happened!” etc.)

    • dionisos says:

      Honestly, I feel that the “many worlds” interpretation of quantum mechanics is not yet science. Maybe one day it will be science, if we find a way to interact with a parallel world somehow. It wouldn’t be the first concept to make this leap.

      From my point of view, this is very similar to asking that we interact with dinosaurs or visit a black hole.

      Things like the double-slit experiment, Bell’s inequality or the EPR “paradox” are like the fossils of the wave function.
      Now if the wave function behaves like we think it behaves, without any (in my view ill-defined) collapse when we “measure” it or when it interacts with “macro stuff”, then we end up with MWI.
      And MWI explains what we see as well as Copenhagen does.

      Maybe there are better interpretations (people are speaking about other interpretations than MWI or Copenhagen in the comments).
      But it still seems like science to me.

      • pontifex says:

        Dinosaurs did interact with us, though. Just to give one example, if they hadn’t existed, birds wouldn’t exist in their current form. Black holes interact with us too. We can see their presence by the effect they have on nearby stars and light.

        Parallel universes don’t interact with us, by definition. If I tell you that somewhere there is a parallel universe where the Narnia books are all true, but we can’t ever reach that universe (and it can’t reach us), what kind of scientific conversation can we have? You can’t prove that it’s false. After all, by definition, the parallel universe never has any effect on this one. You can’t prove that it’s true, or gather evidence for it. Until we can interact with the parallel universe in some way, this hypothesis is not science.

        People are very good at coming up with stories and crazy hypotheses. Philosophers spent a long time talking about the properties of stuff that never existed, like gryphons and unicorns. People love a good story. We have to keep ourselves focused on science with conscious effort. If you lose concentration then you quickly veer off into the old, unproductive modes of thought.

        • dionisos says:

          People are very good at coming up with stories and crazy hypotheses.

          But the MWI isn’t a crazy hypothesis; it isn’t some idea we came up with because it was fun but could not prove.

          Black holes interact with us too.

          The past of the black hole interacts with us, but now the black hole could well be too far away to be able to interact with us anymore (because of the expansion).
          Should we consider this black hole to not be real anymore, given that it is now outside our light cone and always will be?

          In the same way, the wave functions interact with us (by the “fossils” I was speaking about earlier).
          The measurement of a qubit could create an independence between two parts of the wave function of the system composed of the measurement device, the observer, and the qubit.
          Now we can consider the part that ends up independent of us not to exist, just like we can consider everything going out of our light cone to cease to exist.
          But it isn’t a crazy hypothesis to think that no, even if it can’t interact with us anymore, it still exists.

  102. Icedcoffee says:

    I think it’d be helpful if this post were more careful when it used the word “theory” vs. “hypothesis.” These words have built into them a gradient for how seriously you should take the corresponding concept. If someone says “I have a hypothesis that the Sphinx was built by Atlanteans,” you can say “Cool! I hope your research gets funded” and go on with your day.

    (Side note: while part of me definitely wants to see the Atlantean hypothesis validated in some way, the point that strikes me as much more powerful is just how little evidence our current understanding of the Sphinx is based off of.)

  103. Rm says:

    Just wanted to say that the paleorecord gets corrected all the time. Horsetails in the Triassic, feathered dinosaurs, you name it. And it does allow experiment in some cases, such as the particulars of how airborne particles settle in moss cushions (important for reconstructing plant communities from pollen records), or the peculiarities of the carbon-isotope distribution in living trees. Rings a bell, doesn’t it?

    Paleontology has lots of tentatively described “species” or even higher taxa, and strange unidentified “things” which are yet to be built into the system. It’s not the dinosaurs that should have us reaching for Satan, it’s everything else; but somehow everything else just doesn’t attract attention. I wonder if people are just drawn to clearly defined, safe science to play with. Why is it that the untestable things we hear about are always so pretty?

  104. Skivverus says:

    All this talk of MWI brings to mind a hypothesis I’m pretty sure is crackpottery, given my whole “no actual physics background” situation, but it’s at least entertaining and possibly falsifiable crackpottery.
    So, the hypothesis: those Many Worlds we’re not in? They’re dark matter (i.e., the stuff out there that interacts only gravitationally, not electromagnetically), and detectable as such.
    How to falsify the hypothesis: first, assemble an object sufficiently massive (and detectors sufficiently sensitive) that its location can be picked out relative to the background gravitational landscape. Then, send it to one of two locations chosen via a single entangled pair of particles.
    More a proof than a disproof, I suppose; one could always claim that the detectors weren’t sensitive enough, that the influence is just smaller than discernible. Still, it would set an upper bound.

  105. thomasd66 says:

    Elegance does not strike me as an unreasonable criterion, but it does pose the risk of exchanging one set of black robes for another.

  106. seanbailly says:

    I can’t speak for Baggott of course, but I like to think he would agree with the essence of this blog post. In the article he’s not so much railing against metaphysics as against metaphysics untethered from physics: “…there has to be a difference between science and pseudoscience; between science and pure metaphysics, or just plain ordinary bullshit.”

    Baggott complains about the multiverse specifically because it is very popular these days, and it is pure metaphysics. In the Satan/paleontology example, the cosmological multiverse plays the role of Satan. If there are infinite universes where everything can happen, then the explanation for everything becomes: ‘we just happen to live in the universe where fact A is true.’ But that’s exactly like saying ‘Satan did it’. It can literally explain everything, which is what makes it such an empty statement.

    The article would certainly have been better served by trying to differentiate between the ‘good’ and ‘bad’ types of metaphysics. The most important question it raises is: “In the absence of facts, what constitutes ‘the best explanation’?” The response from this blog post is basically ‘Occam’s Razor’, which I would agree with. But “infinite universes” is exactly the kind of explanation that Occam’s Razor would shave off.

  108. Cassian says:

    The multiverse theory is clearly underdetermined by the data. Physicists can work quite effectively without assuming it to be anything other than a heuristic. If “multiverses” were in some sense real, that would add no predictive power over the idea that they are merely useful.

    But the killer point, borrowed from a philosopher of science, is this: ask yourself in what sense a multiverse could be real. It’s not this universe. What’s the difference between calling something “true in another universe” and calling it “false”? Do physicists want a three-valued logic: “true”, “false”, and “true-in-another-universe-but-not-this-one”?

    What kind of violence would that do to the attempts by the author of this piece to preserve rational discourse? It opens the door to “I know my theory of parapsychology failed testing, but it’s ‘true-in-another-universe-but-not-this-one’!”

    Should we allow physicists to change basic metaphysics to fit some data that underdetermines that change? No way. There is a hierarchy of knowledge. You don’t get to change logic itself just to fit some experimental data.

    • Skivverus says:

      An analogous third value already exists: it’s called “possible”.

      Occasionally this is prefixed with “physically im-” or “mathematically im-” to denote “false in all universes”.

  109. MarkE says:

    I’m late to this, but…

    The idea that anyone serious is seriously arguing for a post-empirical science is seriously overblown. The scientific method is often (Kuhn?) characterized as imaginative preconception → hypothesis → predictions → empirical testing.

    This model is particularly applicable to fundamental (i.e. “rules of the game”… I know I’m going to annoy some physicists with this usage… sorry) physics. For example, the Dirac equation or non-Abelian gauge theories were imaginative preconceptions based largely on aesthetic (i.e. what I feel should be, or what would just be so neat) motivations, but were subsequently found to be empirically correct (the discovery of positrons and g ≈ 2 for Dirac, or the whole standard model of strong and electroweak physics for non-Abelian gauge theories). Fortunately for both hypotheses, empirical testing was achievable within a human lifespan… but that was nothing more than good luck for us.

    All theories involving higher dimensions are currently struggling through the first three steps of that Kuhnian progression, and until they get to the point of passing the empirical-testing phase, nobody serious seriously considers them anything more than appealing hypotheses. Note, however, that hypothesizing additional dimensions can result in testable predictions. As a very simple example (Kaluza and Klein), a five-dimensional universe with one spatial dimension being a small circle looks like a 4-dimensional universe to its (large) inhabitants, but five-dimensional gravity looks, to the 4-dimensional inhabitants, like 4-d gravity + 4-d electromagnetism, and every elementary particle comes, in 4-d, with an infinite family of ever-predictably-heavier cousins corresponding to the particle having different (quantized) momenta in the compact circular dimension. Note, of course, that Kaluza-Klein theory passes the empirical test that we see both gravity and electromagnetism in our 4-d world, but empirically fails the prediction that particles should come in infinite families of ever more massive particles. You cannot even save the hypothesis by positing a very large mass-splitting within the families (which would correspond to a very small circumference for the circular dimension) because… chirality. OK, I won’t explain that… but my point is that this is an example where a seemingly fanciful hypothesis led to specific, testable predictions, was falsifiable… and is indeed false.
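
    For what it’s worth, the quantized-momentum spectrum described above takes a simple textbook form (a standard result, added here as a sketch, in units with ħ = c = 1): for a circle of radius R the allowed momenta in the compact direction are p₅ = n/R, so a field of 5-d mass m₀ appears in 4-d as the tower

    $$m_n^2 = m_0^2 + \frac{n^2}{R^2}, \qquad n = 0, \pm 1, \pm 2, \dots$$

    which is why positing a very small circumference (small R) pushes every n ≠ 0 state to enormous mass; that is the escape route the chirality problem closes off.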

    The most developed example of a theory stuck in this hypothesis/prediction stage of arrested development is string theory. A particularly appealing version is the 10-dimensional heterotic string, with six dimensions forming a compact Kähler manifold of vanishing first Chern class. No, I’m not going to explain… just let the words wash over you. Like KK theory, this partial hypothesis makes a number of predictions that are empirically correct: there is (quantum!) gravity; there are gauge theories to build the force-mediating fields of the strong-electroweak standard model; the particles that feel these forces come (as is true empirically) in a finite number of chiral (that word again… the one that killed KK theory) generations (the number determined by topological characteristics of the 6 compact dimensions) that interact with the force-mediating fields in the empirically verified way.

    The problem is that nobody believes these qualitative empirical successes are sufficient for the theory to be considered proven. The theory is not even fully articulated, because there are many possible distinct candidates for the six-dimensional compact space. Nevertheless, these partial successes are sufficiently intriguing to keep physicists working on string theory, or things that look like it, hoping to be able to tease out something that might become a more decisive prediction amenable to empirical test. They are struggling to find a decisive empirical test, so far in vain… but the program of empirical science is being pursued, not abandoned.

    Could we imagine an empirical test, even if we cannot actually construct one? Absolutely! String theory appears to have no (continuous?) dimensionless adjustable parameters in the underlying equations of motion… so if we could get a good enough handle on the dynamics that determines the universe in which we live, we might be able to get enough information about the compact 6 dimensions to make numerical (as opposed to the current qualitative) predictions about the 4-d world we experience, for example the ratio of the mass of the muon to the mass of the electron. If the theory could make such a prediction and get it right (we certainly know both those masses quite well empirically)… well, that would be pretty strong evidence that we are really on to something, at least as stunning as deriving the precession of the perihelion of Mercury. Alas… we have almost no idea how to do that, so we flail in that hypothesis/prediction phase, but flail in hope of a breakthrough that would permit the empirical confirmation we all desire so strongly.
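
    For a sense of the target such a calculation would have to hit (measured values, quoted here only as a benchmark): m_μ ≈ 105.66 MeV and m_e ≈ 0.511 MeV, so

    $$\frac{m_\mu}{m_e} \approx 206.77,$$

    a dimensionless number that, at present, nobody knows how to compute from the theory’s first principles.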

    I should add that this really is very much along the lines of Scott’s section IV… I hope I have explained in some realistic detail how Scott’s “hokey example” is really very close to what is being attempted. There is, however, an optimistic difference in principle between his example and what is being attempted: mostly, the direction being attempted is principle → theory → prediction → empirical success, rather than empirical data → systematization → theory → principle.

  110. trevor says:

    Basically the whole appeal of the multiverse theory is that it preserves the unitary evolution of the Schrödinger wave function. Think of unitary evolution as rotation of an arrow in 2 dimensions, where the tail of the arrow is at the origin and the head traces a path along a circle over time. Except that unitary evolution is really rotation in a massively infinite-dimensional space that is difficult to imagine. In normal QM, whether Copenhagen or Many-Worlds, this rotation is the fundamental dynamics of quantum mechanics. Many-Worlds says that after a measurement happens this rotation continues, whereas Copenhagen says an abrupt change happens. If you try to preserve the rotational dynamics and say that is all there is… it’s pretty difficult without something like Many-Worlds. On the other hand, it’s pretty clear that something abrupt happens in measurement… there is an abrupt change.
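
    In symbols, as a minimal sketch (for a time-independent Hamiltonian H), unitary evolution is

    $$|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U(t) = e^{-iHt/\hbar}, \qquad U^\dagger U = I,$$

    which preserves the “length” ⟨ψ|ψ⟩, exactly like a rotation; a Copenhagen-style collapse instead replaces |ψ⟩ with a projected and renormalized state, which no single U of this form can do.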

    The huge advantage of Many-Worlds is that it offers to preserve unitary evolution. I feel like the OP did not make that very clear. Otherwise it’s pretty unappealing. The same move could be made for any probabilistic argument: one could say that all the possibilities in any event where we use probability are in fact realized in different universes. To me that doesn’t add anything… the great appeal is really the preservation of unitary evolution. In a similar way, the great appeal of Special Relativity is that it managed to reconcile the relativity principle (physics is the same in all inertial frames) with Maxwell’s laws.

    However, if you can’t derive the Born rule or an account of measurement from the Many-Worlds interpretation, then I would agree with viVI_IViv… it’s not really that useful. I also have other objections… I think it’s ontologically extravagant, to use a delightful phrase I found in Sean Carroll’s paper defending it. Even trivial multi-worlds are gargantuan. This was already a problem in just plain QM, where a single particle requires a whole function over 3 dimensions to describe. But when you take the tensor product of two particles, the wave function lives on a 6-dimensional configuration space; with N particles, on a 3N-dimensional one. For example, for 3 particles this means you need to assign a complex number to every single point of a 9-dimensional space. Newtonian physics just required 6 numbers for each particle; it didn’t require a whole function for a single particle and then a function over 3N dimensions for N particles.
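
    A back-of-the-envelope illustration of that growth (the grid size m = 100 below is an arbitrary choice, just to make the counting concrete):

    ```python
    # Rough counting for the comparison above.
    # Classical mechanics: each particle needs 6 real numbers (position + momentum).
    # Quantum mechanics: the wave function assigns a complex amplitude to every point
    # of a 3N-dimensional configuration space; on a grid with m points per dimension
    # that is m**(3*N) amplitudes.

    def classical_count(n_particles: int) -> int:
        return 6 * n_particles

    def quantum_count(n_particles: int, m: int = 100) -> int:
        return m ** (3 * n_particles)

    for n in (1, 2, 3):
        print(f"N={n}: classical {classical_count(n)}, quantum {quantum_count(n):,}")
    # N=1: classical 6,  quantum 1,000,000
    # N=2: classical 12, quantum 1,000,000,000,000
    # N=3: classical 18, quantum 1,000,000,000,000,000,000
    ```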

    One could of course object that this is just a property of standard QM… but the multiverse interpretation says that this whole Hilbert space actually exists! That bothers me. I could get over it if the multiverse offered me something in exchange… but it offers me nothing. No new insights. No reduction of principles.

  111. Riftman says:

    “Paleontology can better predict characteristics of dinosaur fossils”: this counterexample doesn’t seem quite right. It doesn’t demonstrate that the Devil theory has the same predictive power that paleontology has. It demonstrates that only for some hybrid “paleontology + Devil” theory.

    But this new hybrid theory is rather an example of the “chemistry + Supernova” scenario from your second post on the topic, i.e. “we think the Devil theory is worse because it’s less elegant.” And I’m not even convinced one can make a sound philosophical argument for that, BTW: it seems like a matter of taste. I’d personally be perfectly fine with the Devil in the combo as long as the paleontology part is completely intact.