More Intuition-Building On Non-Empirical Science: Three Stories

[Followup to: Building Intuitions On Non-Empirical Arguments In Science]

I.

In your travels, you arrive at a distant land. The chemists there believe that when you mix an acid and a base, you get salt and water, and a star beyond the cosmological event horizon goes supernova. This is taught to every schoolchild as an important chemical fact.

You approach their chemists and protest: why include the part about the star going supernova? Why not just say an acid and a base make salt and water? The chemists find your question annoying: your new “supernova-less” chemistry makes exactly the same predictions as the standard model! You’re just splitting hairs! Angels dancing on pins! Stop wasting their time!

“But the part about supernovas doesn’t constrain expectation!” Yes, say the chemists, but removing it doesn’t constrain expectation either. You’re just spouting random armchair speculation that can never be proven one way or the other. What part of “stop wasting our time” did you not understand?

Moral of the story: It’s too glib to say “There is no difference between theories that produce identical predictions”. You actually care a lot about which of two theories that produce identical predictions is considered true.

II.

Later in your travels, you come to another land. The paleontologists here believe the Devil planted dinosaur fossils to trick humans into doubting Creation.

You approach the paleontologists and argue the same point you argued with the chemists on your last stop – that if two theories make identical predictions, it’s still important to go with the simpler one.

To your surprise, the paleontologists know and agree. “Of course!” they tell you. “And in the dinosaur theory, there must have been, like, millions or even billions of dinosaurs. But the Devil theory explains everything with just one Devil.”

You argue that it doesn’t work that way, but the paleontologists insist that it does. After all, Occam says not to multiply entities beyond necessity. And if the dinosaur theory posits a billion dinosaurs, that’s 999,999,999 more entities than are necessary to explain all those bones.

Moral of the story: “Choose the simpler of two theories that make identical predictions” isn’t trivial. You actually have to understand some philosophy in order to figure out which of two theories is simpler.

III.

You return home and curl up in front of the fire with a good book on quantum mechanics.

Renowned physicist Sean Carroll jumps out from behind you, and exclaims: “Don’t you realize that single-world interpretations of quantum mechanics make both the errors that you fought against abroad?”

You are startled. “This room is locked,” you tell him. “And how did you know what I was doing abroad? Wait a second. Are you secretly the Devil?”

“Untestable, therefore irrelevant!” says Carroll. You wonder if he has always had bright orange eyes. “But being indifferent between ‘wavefunction branches’ and ‘wavefunction branches, and then somewhere we can’t see it one branch mysteriously collapses’ is the same kind of error as being indifferent between ‘acid and base make salt’ and ‘acid and base make salt and water, and then somewhere we can’t see it a star mysteriously goes supernova’.”

He stomps his foot for emphasis, and something falls out of his pocket. Is that a dinosaur bone? He quickly reaches down and pockets it again.

“And,” he adds, “preferring collapse interpretations to many-worlds because there are fewer universes – that’s like preferring the Devil theory to dinosaurs because it involves fewer entities. It’s optimizing over the wrong thing! You’re not literally trying to come up with a theory with as few entities as possible! You’re trying to come up with one that has as few extra moving parts as possible. The process that makes wavefunctions collapse is an extra assumption! Now if you’ll excuse me, I’ve got to go plant this” – he taps the bone – “in a sedimentary rock formation in China.” He vanishes in a puff of smoke. Can all quantum physicists do that?

Moral of the story: Applying the two previous morals consistently lets you prefer the many worlds interpretation of quantum mechanics without having to worry about this being “untestable”.


326 Responses to More Intuition-Building On Non-Empirical Science: Three Stories

  1. Tadas says:

    This post actually helped me formulate what turns me off more and more about this blog (and, by proxy, the rationalist community). It’s written about a topic the writer understands at a shallow, surface level, from reading popular books. And yet humility about one’s lack of knowledge is tossed out the window, and a “deep analysis” with “informative analogies” follows. It is not rational to discuss such topics without the deep understanding that comes from studying the subject.
    Rational analysis of whether it’s ethical to divert a train to kill one person vs. killing several? Sure. There is no fundamental knowledge that any of us lack.
    Rational analysis of quantum mechanics without actually learning and understanding the equations and all the precursor math and physics? That’s simply arrogance and overreach. I have a Ph.D. in physics. The third part of the blog post simply made no sense to me, to the point where I can’t refute it or object to it – it’s like objecting to the word gobbledygook. Looking back at the last year, I see the same pattern in the blog – discussing and analyzing topics which require much deeper background knowledge than the author has. That’s a waste of time and only misleads the reader.
    My analogy: the rationalist community somehow misses the fact that you can discuss the mass of the electron, you can vote on it, reach a community consensus, and go to bed happy. That does not change the mass of the electron.

  2. JohnBuridan says:

    You actually have to *do* philosophy to decide which of two theories is simpler. It’s not a matter of understanding some argument out there, but of deploying the philosophical tool belt to adjudicate the situation.

    However, we have to come to understand both theories, their implications, and their purposes quite completely before we can adjudicate between them. It’s more work than most nonprofessionals are willing to do, hence even the smartest people are surrounded on all sides by epistemic helplessness, and we are left either deferring to experts (because we want to be right) or being skeptical of experts (because we want to avoid being wrong).

    • OldShoeWhine says:

      We have to recognize the human capacity to make idols.

      If you live in a tribe that depends on corn to survive, corn is very important. You will have a corn god. You will have rituals around planting and harvesting corn. And if corn is that important, then the corn god has to exist, it can’t just be a useful abstraction used to organize social form around corn production.

      Likewise, if you are obsessed with math, mathematics is very important. Numbers cannot just be useful abstractions used to organize the social form of mathematics; they have to exist, and exist eternally.

      Now, physics, physics is very important and it is the ultimate answer to all those silly people who believe in corn gods and Jehovah, and the abstractions of physics have to exist, and exist eternally, infinitely, and determine everything that happens under the sun, so there is no room for gods and men to subvert the sacred order. Thus, MWI and mathematical theology.

      It’s all just forms of idolatry. Sean Carroll might as well be a TV preacher, but he’s not slick enough to make it.

  3. If you want to understand quantum mechanics, I suggest you read the book “Making Sense of Quantum Mechanics” by Jean Bricmont. Spoiler alert: Many-worlds is not how to make sense of it. Besides the (important) fact that many-worlds doesn’t have a clear ontology (i.e., local stuff in space such as chairs), it doesn’t give the right probabilities.

    • I suggest commenters demonstrate their basic knowledge of the subject by answering a few questions:

      1. Have you read “Speakable and Unspeakable in Quantum Mechanics” by John Bell?
      2. What hypotheses are needed to derive Bell’s inequality?
      3. What was Schrödinger’s point with the cat?
      4. What is the point of Bertlmann’s socks?

  4. deciusbrutus says:

    >He vanishes in a puff of smoke. Can all quantum physicists do that?

    Yes, although with probability epsilon.

  5. Craig Falls says:

    Copenhagen:

    1. The wavefunction maps classical states to complex numbers.
    2. The wavefunction evolves according to Schrödinger’s equation.
    3. Probability is assigned to the wavefunction according to the square of the amplitude, i.e. Born’s rule.
    4. When someone (or something?) measures something, the wavefunction collapses along that axis.

    (Some particle interactions are measurements and others aren’t. Which is which is left unspecified. Collapse is nonlocal, nonlinear, and non-unitary, unlike every other part of the theory.)

    MWI:

    1. The wavefunction maps classical states to complex numbers.
    2. The wavefunction evolves according to Schrödinger’s equation.
    3. Probability is assigned to the wavefunction according to the square of the amplitude, i.e. Born’s rule.

    These two theories explain the facts equally well. One is simpler than the other, having simply discarded the 4th rule while making no other changes. And the 4th rule was the most troublesome anyway, as it wasn’t fully specified, and seemed unlikely to ever admit a full specification. In MWI, what we call measuring a quantum state is just us becoming entangled with that quantum state, the same way any other blob of particles would when interacting with it.
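    A minimal sketch of this in Python (illustrative only, with a made-up example state): rules 1–3 already fix everything an experimenter records, so adding or dropping rule 4 changes no observable statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    psi = np.array([0.6, 0.8j])          # rule 1: amplitudes for |0> and |1>
    born = np.abs(psi) ** 2              # rule 3: P(k) = |amplitude_k|^2

    # Simulated measurement records, identical under both rule-lists:
    outcomes = rng.choice([0, 1], size=100_000, p=born)
    print(born)                                   # [0.36 0.64]
    print(np.bincount(outcomes) / len(outcomes))  # ~[0.36 0.64]

    # Copenhagen's rule 4 would now overwrite psi with the observed
    # eigenstate; MWI keeps both terms. The printed frequencies are the
    # same either way - collapse is pure post-measurement bookkeeping.
    ```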

    Having special physical rules for measurement is like having special physical rules for bananas. Both concepts are complex and compound and very human and very unlikely to have any role to play in an ultimate theory. Shouldn’t one instead hope that the behavior of bananas follows from simpler facts — the behavior of its constituent parts? I will be very surprised if the way forward is that someone succeeds in describing exactly which arrangements of particles are or are not a banana, I mean a measurement.

    MWI adherents include Richard Feynman, Stephen Hawking, and Juan Maldacena, so it’s not exactly a fringe theory. It’s just not the one taught in mediocre undergraduate physics classes.

    • deciusbrutus says:

      Even the MWI doesn’t distinguish itself from “Classical deterministic rules, but with factors that we can’t measure, that result in a probability distribution of classical states equal to the observed distribution of classical states.”

    • OldShoeWhine says:

      Anything quantitative can only be quantified by virtue of its relationship to a measuring device. A physical process doesn’t take one second; rather, a physical process occurs, and a clock registers that a second has passed. It is a relation between two things.

      Pretending quantity is something fundamental to a discrete physical system, rather than basically a ratio between discrete physical systems, is pretty silly, especially when you factor in Heisenberg’s uncertainty principle: the means of measurement affects the measured.

      If you eliminate measurement, you haven’t eliminated an “assumption”, you have eliminated the process by which complex mathematical models can be coherently translated into models of empirical reality. It doesn’t make any sense to say that the particle moved from X to Y in time t unless you have a clock and a ruler (or their equivalent). Sure, we can postulate infinite universes with all values of X, Y, and t given, but we haven’t brilliantly eliminated an “assumption”; we have eliminated the bridge between pure mathematics and an ability to talk about the real world.

      I suspect the real reason behind the hatred of measurement is that it means that physics is ultimately only a contingent, relational system that depends on human instruments, rather than a substitute for religion. Useful, and it undoubtedly reveals real structures at the core of the universe, but it will never be the ultimate or even final answer. [The problem is not the nonspecificity of ‘measurement’, it’s the nonspecificity of physics, which our mathematical Platonists cannot tolerate.]

  6. OldShoeWhine says:

    My question about the MWI is the following.

    Let’s say it is true, and there are all these new universes branching off every microsecond.

    I am currently in one universe line, and will remain in it all my life.

    What physical process explains why I am in this line, instead of the one where I am God Emperor?

    What blows us in the direction of one line or another? The Fates?

    It seems like an attempt to make it appear “all worked out” when in fact, you might as well say the gods determine whether the cat is alive or dead when we open the box. [Perhaps we should leave offerings and prayers so the fates blow us towards better universes?]

    You just substitute the epistemic uncertainty of probability for the ontological uncertainty of not knowing which universe you are actually in.

    • dionisos says:

      I am currently in one universe line, and will remain in it all my life.

      This is the false assumption.

      I had some debate about it here : https://slatestarcodex.com/2019/11/06/building-intuitions-on-non-empirical-arguments-in-science/#comment-818824

      • deciusbrutus says:

        Suggesting that there are two universe lines that interact with each other?

        Or is there a loss of timelessness involved, and “being in the same universe line” at different times is not a transitive characteristic?

        • dionisos says:

          I mean, bounded individualism is false, particularly in MWI.

          If you measure a qubit in state |0>+|1>, you should not really predict reading either |0> or |1>.
          You should either predict that you will yourself end up in a superposition with two independent parts (which I think doesn’t really make much sense, given there are now two observers and not one),
          or predict that you will not read anything (you aren’t really here anymore, which I believe is the correct interpretation).

          MWI will explain your memory correctly, but I really believe thinking “you” are going “in a particular line” doesn’t work.

      • OldShoeWhine says:

        This doesn’t do anything for me at all. Suppose “I” don’t exist at all, whatever that means. After I die, someone will still be able to write a biography, and certain things will be the case, and other things not the case. [In MW1, say I marry X, in MW2, I am single, etc., my biographer only writes one version.]

        There is a set of statements which is true in my world, such as the date, time, location of my birth, the president at the time of birth, etc., a set of verifiable empirical facts, all the way up to Nov. 22, 2019. There is also going to be an infinite number of worlds created between when I posted on Nov. 22 and now, but there is a new set of facts (say the weather yesterday) that are now the case, but weren’t the case on Nov. 22. [And if they “were” the case then, then there was no way to predict they would end up being the case.]

        What force “blows” me along the line of fate from Nov. 22 to the present? You see, it doesn’t matter if “I” exist or not, it matters whether my options are going to be in money on Tuesday or not. And when I wake up, there will be an answer.

        It’s clearly not predictable, even though all the facts of each universe are completely set in stone – especially since all the physical facts and the laws of physics would exhaust the explanatory powers of nature (they tell me what all the possible universes look like, but not what will be the case when I check next week). It pretty much has to be some kind of metaphysical force approaching a god-like energy.

  7. Eli says:

    Wat?

    An extra star somewhere going supernova is not merely an untestable extra prediction. It’s an untestable extra *causal mechanism*, and therefore penalized.

    Likewise, the supposed one devil planting billions of dinosaur bones is still billions of things happening – the bones themselves – by a single common cause, the devil. That’s one causal mechanism explaining billions of effects, which ought to leave a correlation between them. We don’t observe a correlation implying a single common cause, but instead a vast variation. The devil would have had to be faking evolution *while thoroughly understanding evolution*, so we’re already supposing the causal mechanisms of evolution are *there*, but they’re mediated by the devil’s brain in this case. So again, an extra mediator is an extra causal mechanism that solely mediates, leaving no evidence for itself, so we get rid of it.

    Versus the many worlds, which are a completely acausal mechanism, and therefore subject to metaphysical debates.

  8. bpodgursky says:

    To me, evaluating whether the many-worlds theory is true or not — whether the other waveforms collapse — feels kinda like asking:

    “Does the 13,451,314,999,311,121st Fibonacci number exist before we compute it?”

    Like, sure? We can get there, and there’s a deterministic path to getting there. Asking whether it actually doesn’t happen, or whether it happens somewhere we can’t see, doesn’t seem meaningful. I’m trying and failing to find a better way to describe my feelings here.
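    (To make “there’s a deterministic path to getting there” concrete, here’s an illustrative sketch using the fast-doubling method, which reaches the nth Fibonacci number in O(log n) steps:)

    ```python
    def fib(n: int) -> int:
        """F(n) by fast doubling: F(2k) = F(k)*(2*F(k+1) - F(k)),
        F(2k+1) = F(k)^2 + F(k+1)^2."""
        def pair(k: int):
            # returns (F(k), F(k+1))
            if k == 0:
                return (0, 1)
            a, b = pair(k >> 1)
            c = a * (2 * b - a)   # F(2*(k//2))
            d = a * a + b * b     # F(2*(k//2) + 1)
            return (d, c + d) if k & 1 else (c, d)
        return pair(n)[0]

    print(fib(90))  # 2880067194370816120 - fixed long before anyone computes it
    ```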

    • deciusbrutus says:

      “Does the third-simplest Turing machine that has no proof of haltingness halt?”

      Asking about other portions of the waveform is only possible indirectly, because in order to observe them we would have to be there. There’s no reason to believe that things are different there, because we have no reason to believe that there is a there for things to be.

  9. Johnny4 says:

    I don’t think this point has been made elsewhere, but in philosophy it is generally recognized that there are often tradeoffs between “ontology and ideology”: you can get a simpler ideology (minimizing bruteness – things that can’t be defined or explained) by expanding your ontology (expanding what you think exists), or you can get a simpler ontology by expanding your ideology.

    A famous instance of this tradeoff is in how people think about possibility and necessity, and there’s actually an interesting parallel to the MWI. If you take the logic of possibility and necessity (“modal logic”) totally straightforwardly, it looks like we end up with a plurality of (existing) alternate worlds containing alternate versions of everything. If you interpret things this way, you can simplify your ideology by defining ‘possible’ as ‘true in some world’ and ‘necessary’ as ‘true in every world’. So you have an expanded ontology but a simplified ideology. Many people think that these (existing) alternate worlds are ridiculous, and various problems arise about the nature of identity and morality given the existence of all these alternate worlds. So other philosophers give a slightly more complex “interpretation” of modal logic, but, more importantly, they (arguably) have to take the idea of possibility as primitive, or brute. (Possibility and necessity are interdefinable, so only one needs to be taken as brute.) They have a much simpler ontology but a more complex ideology.

    As far as I can tell, the debate about MWI has a pretty similar structure: MWI buys a simpler ideology with an expanded ontology, plus some puzzles about identity and morality, while rejecting MWI forces you to adopt a more complex ideology (a theory of collapse). Some of you might be thinking, “Hey, maybe these multiple/alternate worlds can do double duty, simplifying the ideology of modality and physics!” But the “worlds” these two theories talk about are not the same thing at all, so you can’t get two for the price of one by believing in a multitude of worlds.
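    (A toy illustration of the trade described above, with made-up propositions: once worlds are in the ontology, “possible” and “necessary” stop being primitive ideology and reduce to quantification over worlds.)

    ```python
    # Each world assigns truth values to propositions (a hypothetical toy model).
    worlds = [
        {"pigs_fly": False, "two_plus_two_is_four": True},
        {"pigs_fly": True,  "two_plus_two_is_four": True},
    ]

    def possible(prop: str) -> bool:
        return any(w[prop] for w in worlds)   # 'true in some world'

    def necessary(prop: str) -> bool:
        return all(w[prop] for w in worlds)   # 'true in every world'

    print(possible("pigs_fly"))               # True: one world has flying pigs
    print(necessary("two_plus_two_is_four"))  # True: holds in every world
    ```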

  10. cj says:

    Imagine a counterfactual world wherein “the Devil planted all the dinosaur bones” really were the best explanation for the existence of dinosaur bones. (In this counterfactual world perhaps biology is a tad different and we have knowledge that circulatory systems won’t work in creatures of that size, or that the atmosphere wouldn’t have supported such life.) Were that the case, I would instead be forced to believe that dinosaur bones do not actually exist, and are instead a mass hallucination, because it’s actually two propositions: “1. they do exist, and 2. the devil planted them”, and if #2 is the only thing that can explain #1 then obviously #1 is not true.

    That’s basically how I view quantum mechanics, as a philosophy reader who is a relative layperson on the physics (I had two semesters of calculus-based physics, and we did cover this, though in limited fashion). If the multiple-worlds explanation really were the best explanation for quantum mechanics, then I would be forced instead to believe that quantum mechanics itself is a mass institutional hallucination, in the same way the aether theories of the 19th century were, and that the entire study is a gigantic wrong turn that everybody has been building upon for decades in error. Because multiple worlds is ridiculous a priori. The Schrödinger’s Cat thought experiment proved the Copenhagen interpretation to be likewise a priori ridiculous decades ago, and I’m not going to re-examine my priors when one of those priors is as basic as the Law of Noncontradiction. (And if they want to tell me that in their model “Object X is at location Y” is not a well-posed statement, then I question the allure of such a silly model.) I’m still waiting on somebody to come up with a philosophically sound explanation for quantum mechanics, and until I see that, I’m considering it to be Aether 2.0.

    • sclmlw says:

      The existence of a luminiferous aether is an interesting example to use here, because I think it is exactly counter to your point that something should make sense before it is believed. Although I tend to share your intuition that interpretations of QM, including MWI, should be held in reserve until we gain experimental confidence in them, the postulation of the aether was a basic extension of logic at the time it was being considered. Indeed, without weird new experiments coming up with strange, unintuitive results, you’d struggle to explain to a physicist at the time why we should prefer anything except a theory that includes the aether.

      Light obviously acts like a wave, based on experimental results. All waves have a medium that they propagate through; ergo light must have a medium through which it propagates, especially through the vacuum of space while it travels from the sun to us here on Earth, or across the vast reaches of the universe from far-distant stars. Ergo, we will postulate that light – a wave – has a medium it propagates through, like any other wave. Indeed, the strange thing to do at this point would have been to postulate that light was somehow a special kind of wave, unlike all the others. If you talked to one of the physicists back in the day – using only the experimental evidence available to them – and said the idea of an aether was a joke, they’d naturally counter by asking how you explain how a wave like light can propagate through the vast empty spaces of the universe. Indeed, I’ll take this opportunity to nerd-snipe all the non-physicists on the forum and ask them to do just that.

  11. Roebuck says:

    Does anyone remember the xkcd comic about green jelly beans?

    I think it illustrates the problem very well. The theory “green beans cause cancer” is simple and very accurate, but the problem is that it belongs to a class of theories which say that some particular bean colour causes cancer, and in the theory competition the best-fitting one will be chosen.

    The theory “you’ve got a p<0.05 on one colour of beans by mistake” shouldn’t be compared to “green beans cause cancer” but rather to “I will look at what colours of beans got p<0.05 for cancer and declare them cancerous”. Just like the Devil, the green-bean cancer theory can easily be swapped for a similar one that fits different evidence, and therefore it should get a disadvantage in the theory competition.
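    (A quick simulation of the point, assuming a made-up 10% base rate and 20 colours: testing many null-effect colours reliably “finds” a cancerous one.)

    ```python
    import random
    from math import erf, sqrt

    random.seed(1)
    N, RATE, COLOURS = 1000, 0.10, 20

    def incidence() -> int:
        # cancer cases in a sample where bean colour truly has no effect
        return sum(random.random() < RATE for _ in range(N))

    def p_value(cases: int) -> float:
        # crude two-sided p-value via the normal approximation to the binomial
        z = (cases - N * RATE) / sqrt(N * RATE * (1 - RATE))
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    flagged = [c for c in range(COLOURS) if p_value(incidence()) < 0.05]
    print(f"{len(flagged)} of {COLOURS} colours 'cause cancer' at p < 0.05")
    # on average about 1 in 20 null colours gets flagged by chance alone
    ```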

  12. mcpalenik says:

    Except the many worlds interpretation doesn’t connect the amplitude of the state vector to probability in any meaningful way. It’s intended to preserve the linearity of quantum mechanics but ironically, you don’t get probabilities out of it unless you introduce additional nonlinearity. I apologize in advance for disappearing after posting this but I don’t feel like discussing this with random people on the internet yet again. I’m just annoyed at how often this gets overlooked (and yes, there are a few papers trying to argue that you actually can get probabilities out of MWI, but the arguments involve a lot of hand waving and don’t really add up).

    • Craig Falls says:

      In both theories the probability is proportional to the square of the amplitude, just by definition. I’m not sure why you think it’s meaningful when Copenhagen asserts that and non-meaningful when MWI asserts that. That part of the theory is invariant. The part that is being changed is collapse: Copenhagen has it; MWI simply discards it, leaving all else unchanged.

      • mcpalenik says:

        Probabilities don’t emerge from MWI. This is really well known among physicists (I’m a physicist: PhD plus 3 years as a postdoc, currently employed as a theoretical physicist at a government lab). You can look this up yourself; a cursory search on Google for “probability in many worlds interpretation” will explain the issues MWI has with probability. I can’t really explain it here in a reasonable number of words, except to say that there is no known mechanism for probabilities to emerge from MWI without some additional assumption being invoked. In that way, it doesn’t offer much more than the Copenhagen interpretation.

        Edit: I’ll make one minor clarification. In MWI, the diagonal elements of the density matrix essentially become their own worlds, because the off-diagonal elements approach zero, so the argument is that there’s no interference or interaction between the states on the diagonal. The problem is, each diagonal element is a world, regardless of its amplitude. So if the amplitude is really high (high probability) you get one world. If it’s really low, you still get one world. The Born rule doesn’t emerge from this picture. There are nonlinear modifications of MWI that reproduce the Born rule, but if you introduce nonlinearity, there are ways to reproduce the Born rule without many worlds.
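        (An illustrative sketch of that picture, with made-up amplitudes: damp the off-diagonal terms of a lopsided superposition’s density matrix. Both diagonal “worlds” survive no matter how unequal their weights, which is exactly the gap between branch-counting and the Born rule.)

        ```python
        import numpy as np

        a, b = np.sqrt(0.99), np.sqrt(0.01)   # very lopsided amplitudes
        psi = np.array([a, b], dtype=complex)
        rho = np.outer(psi, psi.conj())       # pure-state density matrix

        for gamma in [0.0, 0.9, 1.0]:         # progressive decoherence
            mask = np.array([[1, 1 - gamma], [1 - gamma, 1]])
            print(f"gamma={gamma}: diagonal={np.real(np.diag(rho * mask))}")

        # The diagonal stays (0.99, 0.01): one high-weight branch, one
        # low-weight branch. Nothing here says why experience should track
        # the weights rather than the branch count - the comment's point.
        ```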

  13. Markk says:

    Peter Woit noticed your previous post and commented at his blog.

    [I]f theorists had a simple, elegant multiverse theory with lots of explanatory power, you could get into interesting arguments about its testability and whether the idea was solid science or not. The problem is that no such multiverse theory exists. If you want to talk about the MWI multiverse, your problem is that solving the measurement theory problem by just saying “the multiverse did it” may be “simple” and “elegant”, but it’s also completely empty.

    • Craig Falls says:

      Funny, I would put it the other way around:

      [I]f theorists had a simple, elegant collapse theory with lots of explanatory power, you could get into interesting arguments about its testability and whether the idea was solid science or not. The problem is that no such collapse theory exists. If you want to talk about collapse, your problem is that solving the measurement theory problem by just saying “collapse did it” may be “simple” and “elegant”, but it’s also completely empty.

      The only difference is that no one argues that the extra postulate of collapse is simple or elegant.

  14. bagel says:

    Moral of the story: It’s too glib to say…

    Glib is my middle name.

    “There is no difference between theories that produce identical predictions”. You actually care a lot about which of two theories that produce identical predictions is considered true.

    If you make hypotheses that are not testable in principle, you don’t get to stand on the Popperian soapbox.

    “Choose the simpler of two theories that make identical predictions” isn’t trivial. You actually have to understand some philosophy in order to figure out which of two theories is simpler.

    Occam’s a chump; not all entities are made equal. If you propose an entity whose actions and capabilities can’t be falsified in principle, you don’t get to stand on the Popperian soapbox.

    “Untestable, therefore irrelevant!” says Carroll

    If you don’t already read Dresden Codak, you might enjoy it.

    • bagel says:

      To say it another way, the first two parables ignore the burden of proof. They both dance cleverly around the fact that they have proposed unfalsifiable objects and then demanded that someone else disprove them, without actually submitting evidence of their own.

    • If you don’t already read Dresden Codak, you might enjoy it.

      There’s some evidence that Scott is already familiar with it.

  15. Hamish Todd says:

    When I read the sequences, I learned many medium-sized interesting things presented in Yudkowsky’s entertaining and clear style. And I learned one ginormous thing: that between Bayesianism and “maximum entropy”, all the problems of epistemology that I care about have been solved.

    In this post you say “you have to learn some philosophy”. I would change this to “you have to learn some maths”. Sure, philosophy works, including here, but being quantitative about things allows you to go further and be very precise – precise enough to invent the basis for digital signal processing, or to write an algorithm that will output the ideal approximation of a straight line that should be drawn through a set of data points.

    The maths that you have to learn is called “information entropy”. It was originally discovered by Claude Shannon as a way of making communication over noisy channels better, and was picked up by Jaynes (who influenced Yudkowsky) as the basis for his philosophy of and approach to science. If scientific theories are precise enough, their information entropy can be calculated. Of two theories, the one with higher entropy is, in an objective and mathematical sense, more likely.

    Something that looks like “philosophy” might creep into the way that theories are compared. But I think it can be gotten rid of, because good theories can be formalized into probabilistic statements about predicted experiences. Thinking of all the reasons that the devil explanation in story 2 is worse than fossils, I think they could be formalized fairly easily. Furthermore, if Friston is to be believed, in the same way that much of the brain is probably Bayesian, much of the brain is also trying to find maximum-entropy explanations (unless it is prioritizing things other than truth-seeking).
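    (For concreteness, the quantity being named, sketched in Python; the example distributions are made up.)

    ```python
    from math import log2

    def entropy(p: list[float]) -> float:
        # Shannon entropy H(p) = -sum p_i * log2(p_i), in bits
        return -sum(q * log2(q) for q in p if q > 0)

    sharp_theory = [0.97, 0.01, 0.01, 0.01]   # commits hard to one outcome
    vague_theory = [0.25, 0.25, 0.25, 0.25]   # maximally noncommittal

    print(entropy(sharp_theory))  # ~0.24 bits
    print(entropy(vague_theory))  # 2.0 bits - the maximum-entropy choice
    ```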

  16. michelemottini says:

    The wave collapse is not an extra assumption: it is an indispensable part of quantum mechanics – without it we cannot actually compute what happens. If you study quantum physics you are going to study it and apply it – all the time. It is in all textbooks: ‘Immediately after the measurement of an observable A has yielded a value a_n, the state of the system is the normalized eigenstate |a_n>.’ (http://web.mit.edu/8.05/handouts/jaffe1.pdf)

    Interpretations of what this could mean, or of how it can be explained, are not theories – that’s why they are called interpretations. So discussing how or why we should prefer certain theories over others is totally immaterial when speaking of interpretations – there is not really a way to choose one over another within the framework of scientific theories, because they are not theories. It is a matter of taste.

    • MicaiahC says:

      I don’t think the claim that collapse is an indispensable part of quantum mechanics holds up, pedagogically speaking.
      (Anecdote alert)

      When I was taking undergraduate QM and had not yet been exposed to the Everettian interpretation, I was constantly confused by what collapse is and what constitutes a measurement, and all of my classmates were too (among the *most* asked questions: What is collapse? Why would what measurements do be reflected in the physics in this manner?). After I was exposed to it, a lot of my uncertainty vanished, and I was able to gain a pretty good intuition about the behavior of quantum systems and which parts of the formalism were important, whereas my classmates still seemed fairly hung up. In fact, multiple professors have stated that every class they teach is extremely hung up on what collapse means.

      This difference in the interpretability of QM *remained* until fairly late in graduate school – even as my ability to handle the math involved deteriorated due to gaps in my math prerequisites – when the various alternatives to wave mechanics were presented and my classmates (who came from different undergraduate institutions) would just go ahead and do the formalism while being unable to relate it to experimental setups.

      If it really is just a matter of taste, or if collapse is indispensable, why is collapse such a pedagogical obstacle? There are *tons* of counterintuitive and weird things in standard physics, like the Lagrangian formulation of mechanics and turbulence in general, so there’s no a priori reason to suppose that it’s *just* a matter of weirdness or mathematical complexity.

      Edit: fixed some misspellings + sentence structure

  17. zzzzort says:

    I’m tempted to replace the words ‘non-empirical science’ with ‘philosophy’ and go on our way.

    Instead of conceptualizing ‘science’ as an algorithm for choosing hypotheses given data, we can instead take a more Kuhnian view of science as a community and a pursuit. And here I think it makes sense to ask: which hypothesis is more useful to science? The majority of working physicists I know are in the shut-up-and-calculate camp. They have opinions about which interpretation is better, but practically it doesn’t matter if you’re doing QFT. The Copenhagen interpretation is pretty easy to teach and apply to experiments. MWI gives some nifty intuition about how quantum computing works.

    Note that non-predictive elements are often useful. In classical electromagnetism, potentials are not physically observable, and not believing that they exist will yield the exact same predictions. But potentials make calculations a lot easier, and improve intuition. Likewise, in examples I and II the more useful theory (easier to teach, easier to remember, more intuitive) is the ‘correct’ one. The Devil may or may not be simpler in an absolute sense, but he doesn’t do anything useful in that theory.
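    (The potentials point can be checked symbolically; here is a one-dimensional sketch using sympy: a gauge transformation changes the potentials but leaves the observable field untouched, so believing or disbelieving in them predicts nothing.)

    ```python
    import sympy as sp

    x, t = sp.symbols('x t')
    phi = sp.Function('phi')(x, t)   # scalar potential
    A = sp.Function('A')(x, t)       # vector potential (1D)
    chi = sp.Function('chi')(x, t)   # arbitrary gauge function

    E = -sp.diff(phi, x) - sp.diff(A, t)   # observable field
    phi_g = phi - sp.diff(chi, t)          # gauge-transformed potentials
    A_g = A + sp.diff(chi, x)
    E_g = -sp.diff(phi_g, x) - sp.diff(A_g, t)

    print(sp.simplify(E - E_g))  # 0: different potentials, same physics
    ```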

    I. We haven’t observed the supernova doing this, and there is no mechanism to link the supernova to a local chemical reaction, whereas the chemical reaction itself has relatively well-investigated reasons and proof behind it. It’s not about expectation; it’s about not saying things you can’t back up. We could go beyond those chemists and say that a whole new Earth is also created equidistant in the opposite direction, and they’d have no reason to dismiss it. We could start talking about stuffed bears appearing beyond the cosmological horizon, and they’d have to start appealing to empiricism at some point if they wanted to stop the acceleration into madness.

    II. This is a flaw with Occam’s Razor. It’s better to point out that dinosaurs are more plausible because, while they mean many new entities, the characteristics proposed for the concept “dinosaur” are those which exist in pre-existing animals, plus those which fit within already-known mechanical limits. If you see a load of skeletons that look like animals, even slightly exotic ones, then saying that the creatures who left these fossils were animals (given the pre-existing proof of biology, its limits, and lineages connecting their anatomical features to birds) is more reasonable than saying they were faked by an intelligent being which no other evidence points to having existed at the time – not to mention any other physics-breaking characteristics that are wrapped up in the concept of “Devil”.

    III. I think this is a problem created by science needing to refer to things it does not have information about with a clear designation. Calling up the idea of a “wave function” doesn’t necessarily add anything to what we observe. It’s just a way to organize our thoughts and allow us to investigate new ways to manipulate what we are observing. Similarly, we have the concept of dark matter in cosmology to mark the question of how galaxies rotate so fast but still stay together. Dark matter really just means “we don’t know what’s doing this, but we’re hoping it’s some kind of matter, because at least then we have a chance at detecting it”. You try getting funding for a Devil Detector.

  19. ksvanhorn says:

    I don’t agree that MWI is untestable. If MWI is true then there is no fundamental limit to the size of a coherent quantum system, just engineering limits. If “spontaneous collapse” theories are true, then there is a limit. And these are really the only two viable alternatives.

    The Bohmian interpretation is out of the running because it’s non-relativistic. Proponents have been trying to make it relativistic, and failing to do so, for decades. If they ever succeed, and can deal with Quantum Field Theory, then we can start taking this possibility seriously.

    QBism is out of the running because it claims that the wave function is purely epistemic, representing a state of knowledge, but state of knowledge *of what*? QBists just tell you not to ask that question.

    Copenhagen is out of the running because it’s fundamentally incoherent. You have to engage in this doublethink where QM is a fundamental physical theory that governs the behavior of atoms, and yet there still exists this classical world in which you can have a classical measuring apparatus and classical observer that can interact with a quantum system, causing its wave function to collapse… even though the measuring apparatus and observer are themselves both made of atoms whose behavior is governed by QM.

    • Craig Falls says:

      Yeah it’s not really clear what exactly the Copenhagen folks believe.

      It feels like a motte-and-bailey trick. The motte is “shut up and calculate”. The bailey is “the world is classical except inside our experiments, and all the quantum stuff goes away when we measure it, and there’s definitely only one universe”, or something like that.

  20. andagain says:

    Applying the two previous morals consistently lets you prefer the many worlds interpretation of quantum mechanics without having to worry about this being “untestable”.

    It’s not obvious to me why this means I should prefer MWI to, for example, the Transactional Interpretation.

    • ksvanhorn says:

      I’ve never been able to make any sense of Cramer’s Transactional Interpretation, despite really wanting to, because it sounds like such a cool idea. I keep hoping that if I read more of what he’s written he’ll clarify the vague explanations he gives and say exactly what he means, but it never happens. How does it even deal with entanglement? All the discussion and examples are of wave functions of single particles.

  21. rwzerum says:

    1. The “no testable difference” principle just means we don’t believe things for no reason. There’s no reason to believe the supernova part of the chemistry example (and indeed nobody does believe such a thing).

    2. The reason the Devil theory isn’t “simple” is that (A) people no longer believe in Christianity, and (B) even if you do, it doesn’t make sense. If there were good reason to believe in Christianity, as well as in God permitting Satan to arrange things like that, it would be “simpler.” So yes, “simple” is a heuristic that takes some expounding upon, but people more or less understand what it means.

    3. MWI is something that there’s barely any good reason to believe in, and it isn’t “simple,” because it doesn’t even explain the one thing it’s been invented to explain.

    This QM hair-pulling happens because present-day thinkers are dead set on treating consciousness as “woo.” Acknowledging that everything you know, everything that *is*, for you, is a subjective experience had from a first-person viewpoint is “woo,” so there absolutely must be a coherent, abstract, third-person way of describing everything. This belief that there’s some quasi-spiritual religious hazard in acknowledging the one obvious fact of your existence will keep this debate lively for probably another hundred years.

    Not that acknowledging the inherently first-person nature of your existence solves the problem, but it may eventually shift the hair-pulling to a different topic.

    • Iago the Yerfdog says:

      I’m beginning to think that “the simplest explanation” is simply not a well-defined concept. What it means will depend on context.

      EDIT: Yes, I know about “minimum description length,” but its problems strike me as insurmountable.

  22. LGS says:

    Scott, I don’t think you understand the primary objection to MWI.

    It’s this: how much evidence would it take for me to convince you that you don’t exist? If I had a beautiful, elegant theory that explained everything, but had the unfortunate consequence of conclusively demonstrating that you, the reader/observer, are not conscious, would you believe it?

    That’s more or less the bullet MWI asks you to swallow. It says that there’s not a “you” anymore; there are infinitely many. That by itself might not be a problem if you could just be one “copy” of the yous – but no, MWI says you’re a soup of all of them at once, not separable into individual copies. You’re a complex linear combination of the yous, not just one of them. It does not make sense to ask “if I see a red dot now, will I see a blue dot after the unitary is applied” – because there’s no individual “you” in just one world; there’s an inseparable soup of yous. You see both red and blue, in a complex linear combination.

    Here, a quote from Aaronson from the link above, talking about the perspective of a single “you” in MWI:

    Look, we all have fun ridiculing the creationists who think the world sprang into existence on October 23, 4004 BC at 9AM (presumably Babylonian time), with the fossils already in the ground, light from distant stars heading toward us, etc. But if we accept the usual picture of quantum mechanics, then in a certain sense the situation is far worse: the world (as you experience it) might as well not have existed 10^(-43) seconds ago!

    The MWI debate was never about whether multiple worlds is “simpler” than one world.

    • Iago the Yerfdog says:

      I’m glad there are starting to be some more responses like this. MWI may well have every virtue you could ask of a scientific explanation… until you think about what it would be like to actually think of yourself as living in an MWI world.

      That isn’t per se an argument against it being true, but if I have to ignore MWI’s practical implications every time they are relevant, how is my belief in MWI anything more than lip service?

      • Shion Arita says:

        I haven’t experienced anything that I would find inconsistent with MWI.

        To go to the simple case, let’s look at me measuring the spin of an electron. There are a lot of interactions omitted in this model (like the detector with the electron, me with the signal from the detector, etc.), but that doesn’t change the important part of what happens:

        before the measurement the math says we have:

        |Me_initial>(|e-_up> + |e-_down>)

        and after it we have:

        |Me_up>|e-_up> + |Me_down>|e-_down>

        I really don’t see how there’s any way to interpret this other than that before the measurement I am not correlated with the electron, and after the measurement there are two noninteracting mes, each correlated with the electron in its respective state. Since they don’t interact, and each internally sees one history as happening, that lines up with what I experience as well as anything can.

        For this not to be the case, you’d have to say instead that, after the measurement, one of those terms in the equation just goes away completely randomly, for… no real reason. At least to me, that’s a MUCH harder pill to swallow than my present having multiple noninteracting futures.
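        (The two lines of algebra above can be checked numerically; here is a sketch with “me” modeled as a one-qubit pointer and the measurement as a CNOT that correlates pointer with electron.)

        ```python
        import numpy as np

        up, down = np.array([1, 0]), np.array([0, 1])
        me_initial = up                          # pointer blank, will record "up"/"down"
        electron = (up + down) / np.sqrt(2)      # (|e-_up> + |e-_down>)/sqrt(2)

        # Basis ordering |me, e->: |00>, |01>, |10>, |11>
        before = np.kron(me_initial, electron)   # |Me_initial>(|up> + |down>)/sqrt(2)

        # CNOT: electron controls, pointer flips - this is the "measurement"
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0],
                         [0, 1, 0, 0]])
        after = CNOT @ before

        print(np.round(before, 3))  # [0.707 0.707 0.    0.   ]
        print(np.round(after, 3))   # [0.707 0.    0.    0.707]
        # after = (|Me_up>|e-_up> + |Me_down>|e-_down>)/sqrt(2): two correlated,
        # non-interfering terms - the "two noninteracting mes" of the comment.
        ```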

        • Craig Falls says:

          Exactly. I wish one of the Copenhagen adherents would respond to one of the posts like this one so I could maybe understand how they’re thinking.

          • LGS says:

            Did you try reading my link above? Here it is reposted:

            https://www.scottaaronson.com/democritus/lec11.html

            Basically, the whole problem starts with the assumption that the multiple “mes” are non-interacting. According to quantum mechanics, they could interact in the future. If you think they fundamentally can’t, then not only are you a Copenhagen adherent, you’re a real-collapse theorist: you’re making an empirical prediction that human-scale objects cannot be put in coherent superposition. I assume you do not, in fact, make this prediction, so let’s drop the “noninteracting” word, which is going to be false in the future.

            So there are multiple “mes”, and they CAN interact. Now, if I’m “one of the Mes”, and someone is about to apply a unitary to me, how should I think about this? Can I do Bayesian reasoning to calculate what I’m likely to see? The answer is no: conditional probability is undefined when you’re just “one of the mes” (that concept doesn’t really make sense in quantum mechanics). It’s literally undefined to ask what the chance is that I’ll see a blue dot later given I’m seeing a red dot now; click the link above!

            Quantum mechanics does not give you the option of being merely “one of the mes”. You are all of them, mixed in a complex linear combination. Talking about only one copy does not make much sense, not unless you freeze time and consider only a single frame of the universe (as Shion does in the parent comment).

          • Harry Maurice Johnston says:

            @LGS, I don’t see how the choice between Copenhagen/MWI affects the question of whether it is possible to put a living human into a coherent superposition for any measurable length of time. (Personally, my prediction would be that it is not.)

            … granted, as Robert and I were discussing earlier, there is some confusion in my mind about what decoherence actually is and whether or not it really has anything to do with measurements, but even so, taken as an engineering question, it seems to me that it should be thermodynamically impossible to deliberately put a human into a coherent superposition, in the sense of actually seeing anything equivalent to an interference pattern. I have the feeling this isn’t really what you mean?

          • LGS says:

            @Harry, if you believe that it’s not possible in principle to put any kind of observer in coherent superposition (whether human or cat or futuristic AI or anything else you consider conscious), then what you’re suggesting sounds closer to a real-collapse theory; effectively you’re saying the world isn’t governed just by quantum mechanics, but also by the additional rule “big things can’t be put in coherent superposition”. As this is not really standard QM, it’s neither Copenhagen nor MWI.

            The whole *point* of interpretations (especially the Copenhagen one) is to worry about what it means when you yourself are put in superposition. If you want to claim that’s impossible, there’s not much point debating MWI vs. Copenhagen; the answer may indeed boil down to which is the “simpler” theory, but now you need to account for the simplicity of the “big things can’t be put in coherent superposition even in principle” part of your theory as well, which will likely dominate the complexity penalty of the whole story anyway.

          • Harry Maurice Johnston says:

            But there’s no requirement for an extra rule, that’s my whole point. The typical human body has about 7*10^27 atoms, and needs to be kept warm; how are you possibly expecting to force it to remain coherent?

            For purposes of comparison, this achievement required temperatures below a tenth of a kelvin.

            MWI assumes that humans can be in superposition, but not coherent superposition.

          • LGS says:

            @Harry, “how could you possibly expect” is a practical consideration, not a theoretical possibility argument – and to some extent so is the phrase “coherent superposition”.

            The different worlds should *statistically* interfere/collide with non-zero probability, even for a warm human body; the probability is just really small. And after a long enough amount of time (exponentially large, much past the death of the universe), the amount of different configurations of matter in the universe will be exhausted, and at that point all the different worlds will interfere with mathematical certainty.

            Then there’s the question of whether you can implement a conscious observer on a computer. If you can, then in principle there’s nothing stopping you from running that implementation on a *quantum* computer, and voilà – an unambiguously coherent superposition of conscious brain states. Are you saying computer programs can never be conscious no matter what, as a matter of principle?

            So I don’t think saying “the worlds never interact” is a satisfying philosophical position. It forces you to ignore the current interaction that’s taking place with extremely small probability; the interaction that’s guaranteed to take place after the death of the universe; and the interaction that can be made to take place if you’re willing to assign consciousness to computations rather than only to warm human bodies. That’s a LOT of bullets to bite, snuck in without pausing to consider them! At the very least I’d like the MWI proponents to *mention* these and *say* they’re biting all these bullets, rather than pretending they don’t exist.

          • Harry Maurice Johnston says:

            @LGS, well, you did say “empirical prediction”, so I think the practical considerations are more relevant than the theoretical possibilities. I suppose that means I had misunderstood what you were trying to get at. (As I see I had already predicted!)

            FWIW, I really was just objecting to that one point, not trying to argue in favour of MWI. That said, as long as I’m nitpicking, why stop there? 🙂

            The different worlds should *statistically* interfere/collide with non-zero probability, even for a warm human body; the probability is just really small.

            But since this would (presumably) have no perceptible effect on our consciousness – it would be indistinguishable from the thermal noise that is always present – we have no evidence that it doesn’t happen.

            We also (obviously) have no evidence on what will or won’t happen long after the death of the universe.

            Then there’s the question of whether you can implement a conscious observer on a computer.

            I don’t see how this would be a problem?

            I mean, I suppose that if you’re a quantum consciousness that doesn’t know it is a quantum consciousness, and interacting with a quantum experiment in a coherent way when you thought you were making measurements in the Copenhagen sense, you might get inconsistent results. Particularly if someone was messing with your brain as described in the first part of Scott Aaronson’s article.

            But even in an entirely classical universe you might experience inconsistent outcomes if someone is experimenting on your brain! So I don’t see this as surprising.

            So I don’t think saying “the worlds never interact” is a satisfying philosophical position.

            Ah. Well, if you interpret MWI as making that claim, I can certainly understand your objections. However, I think that’s a misunderstanding of what MWI says.

    • smack says:

      I think this is a very important point, and I *do* think it has scientific force, when coupled with the kinds of points that people like Michael Polanyi made about science.

      Which is this: far more of science than we ever imagine rests on our everyday, common-sense knowledge about the world – our “how to ride a bike” knowledge instead of our “how to add 2+2” knowledge. What an experiment means, how to read a dial, which things to write down, which things to ignore, what it means that “a scientist” “performed” “an experiment,” and so on. MWI threatens to overthrow the intuitions on which all these things are built; it thus threatens to overthrow the very observations on which our confidence in QM rests.

      I don’t think this is the very strongest argument against MWI, but I think it’s a nontrivial one.

  23. Machine Interface says:

    And yet the pilot-wave theory has analogous macroscopic systems.

    • doubleunplussed says:

      Eeeeh. Extremely tenuously, and for single particles only. It ain’t single-particle quantum mechanics making people argue about interpretations.

  24. Viliam says:

    After reading too many debates on this topic, it seems to me that they lead nowhere, because you can do both Copenhagen and Many-Worlds the wrong way, or the right way. But instead of focusing on the details of the right way vs. the wrong way (which perhaps are so obvious to some experts that they never bother to explain them), people focus on the keywords, which serve better as rallying flags.

    How I see it:

    If you believe in a collapse as a magical event which instantaneously destroys branches of the universe, you are doing it wrong.

    But there is a version of many worlds, where you have a “split” which instantaneously creates branches of the universe, and that makes exactly the same mistake.

    (Did I just make up the second one? Let me quote jermo sapiens in this thread: “When the wavefunction interferes with itself it has not collapsed yet and therefore there is no universe-split either.” See, there is a collapse and a universe-split in the same sentence. Should we call this a Many-Worlds-After-Collapse Interpretation?)

    On the other hand…

    If by “collapse” you mean that the entanglement between an outcome and alternative outcomes gradually becomes negligible (so at some arbitrary point you can simply stop thinking about the alternative outcomes), you are doing it right.

    And if by “many worlds” you mean that the entanglement between various outcomes gradually becomes negligible (so at some arbitrary point you can simply think about them as happening in different worlds), you are doing it right.

    The only thing to debate here is the difference between “does not exist” and “exists, but in a different world”. Even this difference could disappear if we simply admitted that existence is relative (i.e. that whenever someone uses the word “exists” in a seemingly absolute sense, it actually means “exists, relative to me, and anyone I am talking to”). Then, even according to the MWI, the other worlds no longer exist from our perspective, and we no longer exist from theirs.

    There is no collapse
    And there is no universe-split
    The evolving wavefunction gradually decoheres
    The mixed states are gone, gone, no longer measurable, wake up, game over

    • uau says:

      If by “collapse” you mean that the entanglement between an outcome and alternative outcomes gradually becomes negligible (so at some arbitrary point you can simply stop thinking about the alternative outcomes), you are doing it right.

      I don’t agree with you here. IMO this is not a very sensible view of Copenhagen. If you don’t consider the effect of alternative branches an exact and absolute zero (with no room for gradualness like “in practice you can stop thinking about it” or anything like that), aren’t you essentially using a many-worlds interpretation?

      “I see the cat is alive. There’s a version of me who’s now burying its dead body, but his influence on anything I’ll do in the future is very very small, so I can safely stop thinking about him.” – this really doesn’t sound like a non-many-worlds view!

    • Iago the Yerfdog says:

      I had a similar thought after posting my comment: what if the wave-function is quantized such that after a point the probability of less-probable branches rounds to zero and those branches disappear? That would mean that in any given moment there is some “fuzz” (or as David Chapman would say, “nebulosity”) to the current state of the universe but that sharply-diverging branches are squashed fairly quickly. This could get around my objection to MWI.

      The problem would be that (a) this quantization is an additional complication, and (b) branches where intelligent life exists are arguably all outliers that should have been squashed under this interpretation.

    • jermo sapiens says:

      Let me quote jermo sapiens in this thread:

      FTR, I don’t know what I’m talking about. I have a vague understanding of something, it is probably wrong, and I’m just spitballing to see if others can correct me.

    • Craig Falls says:

      I think you believe in MWI. No one believed what you describe as the “right” version of Copenhagen before Everett came along. You can tell because they thought entanglement was weird, rather than thinking it was the norm. And you can tell because they were surprised to find out that distant measurements could be correlated in non-classical ways, and yet information couldn’t be transmitted faster than light. All this is obvious in either of your “right” interpretations, both of which most people would call MWI.

      • ec429 says:

        I’ve been forming the opinion for a while that Copenhagen is the dragon in your garage: Copenhagenists can always find an answer for ‘when collapse happens’ in this particular experiment so that they get the right result, but at some point a theory of ‘collapse always happens, but never until it’s too late to show up’ (even on a delayed-choice quantum eraser that sends some of its photons via a mirror on Proxima Centauri b) starts to look awfully like you don’t really believe in collapse. And yet, that’s exactly the theory being proposed by anyone who argues “MWI isn’t science because it makes no predictions!”

  25. Jacob says:

    Re: example 2: “The ‘simplest’ explanation is that the lady down the street is a witch and she did it.” – Robert Heinlein (Note: the only sources I can find for this quote are rationality blogs, so maybe not an actual quote, but it expresses the sentiment nicely)

    Re: actual substance:

    As far as I can tell, the best argument for the Everett/Many-Worlds interpretation of QM is that the Copenhagen interpretation is stupid. This isn’t exactly a strawman (since Copenhagen was/is a super-commonly held belief) but it follows the same flawed reasoning. If option A is stupid, and option B is stupid but makes fewer assumptions, maybe we keep looking rather than declare option B to be true? Personally I’m a fan of Relational Quantum Mechanics. It’s weird and hard to understand, but in a universe where we know relativity and QM are both true it doesn’t seem to add any additional complexity. Or maybe it does, but in a reasonable way that we expect of new physical theories, and it doesn’t add infinite universes. There are plenty of others to pick from: https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics

    • Iago the Yerfdog says:

      If option A is stupid, and option B is stupid but makes fewer assumptions, maybe we keep looking rather than declare option B to be true?

      Exactly this. At some point, the correct option is to shrug your shoulders. We do this all the time with other questions, like “Why is there so little anti-matter?”

      I suspect the reason this wasn’t done in QM is that the actual math is so complicated and counter-intuitive that no one would have believed it without some simplified explanation, but then new generations started taking those interpretations too literally.

      • zzzzort says:

        Baryogenesis is very much an active topic in physics research. It is also much more of a well-formed physics (rather than philosophy) question: a lot of people do model building that is exquisitely sensitive to experimental constraints (EDM is a cruel mistress), and occasionally they make new predictions testable in collider or astro experiments.

    • Thegnskald says:

      Er. What do you think the difference between relational quantum mechanics and MWI is?

      As far as I can tell, the difference is entirely based on the… rather inaccurate description of MWI propagated in popular science.

  26. mustacheion says:

    I have an MA in physics, and attempted to specialize in field theory in grad school before moving to semiconductors, because the math involved in field theory was a little beyond me. I am completely baffled by this rational-sphere MWI debate; either my understanding of physics is totally wrong, or Eliezer, Carroll, and apparently Scott’s understanding of physics is totally wrong. So I would like to hear from other degree-holding physicists – what is the accepted mainstream physics canon?!?

    Here is my understanding of physics:

    I think we can all agree that the universe does not run on Newton’s Laws. No part of the physical universe actually obeys f = m a, it actually runs on relativistic mechanics (ignoring the whole quantum side at the moment). But of course, f = m a is a really accurate simplification that gives great results most of the time, and is much easier to use than relativistic mechanics, and so classical mechanics is a really useful tool in the arsenal of a modern physicist.

    As I understand things, the exact same thing is true for Schrodinger’s equation, for the exact same reason. Schrodinger’s equation is fundamentally incompatible with relativity. Here is an intuitive way to see why this has to be true. Relativity puts time and space on a fairly even footing; the Lorentz transformations allow time and space to be somewhat interchangeable, and so any physical equation that is compatible with special relativity must treat time and space in a similar fashion. But Schrodinger’s equation has a first order derivative in time, and a second order derivative in space. This is such a substantial difference that it cannot be squared with the Lorentz transformations. Schrodinger’s equation is fundamentally non-relativistic. And so Schrodinger’s equation is fundamentally not used by the universe.
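
    Side by side, the asymmetry in question (the second equation, the Klein-Gordon equation, is the relativistic counterpart that comes up next; both are standard forms):

    ```latex
    % Schrodinger: first order in t, second order in x -- not Lorentz-symmetric
    i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
    % Klein-Gordon: second order in both t and x, treating them on an even footing
    \left(\frac{1}{c^2}\,\partial_t^2 - \nabla^2 + \frac{m^2c^2}{\hbar^2}\right)\phi = 0
    ```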

    So what does it take to square quantum mechanics with special relativity? Well, it takes field theory. Back in the 1920’s and 30’s, the founding fathers of quantum mechanics started with Schrodinger’s equation and manipulated it to try to make it relativistic. And they succeeded. They ended up with the Klein-Gordon equation and the Dirac equation – the core equations of field theory. But, even though they derived these equations way back in the early days of quantum mechanics, Dirac and others didn’t really understand how to use them. They are difficult to interpret, predict weird things, and back then physicists didn’t have enough experimental evidence to vindicate them. But that changed starting in the late 1940s, as a result of Richard Feynman and others, who figured out how to properly interpret these equations, and how to match them to experimental evidence. Thus field theory was born.

    Field theory does not have a wave function. It does not have collapse, or observers, or any of these other troubling things that Schrodinger’s formulation of quantum mechanics has. Field theory has fields. When you average out the behavior of fields, if relativistic effects are small, then the math simplifies to a form identical to that of wave functions and Schrodinger’s equation.

    And so Schrodinger’s equation, and all of the apparatus of that branch of quantum mechanics is exactly analogous to classical mechanics. It is an excellent tool for any modern physicist to have in their toolbox, because it is a very useful approximation for how the world works that is much easier to use than the true relativistic models. But neither Newton’s laws nor Schrodinger’s equation are in any way fundamental. The universe does not truly obey either.

    I am not really qualified to talk about field theory. But it is my understanding that it simply does not have the problems that non-relativistic quantum mechanics has. When you ask about how to interpret a quantum observation, field theory doesn’t even know what you are talking about. Eliezer goes on this diatribe about how wave-function collapse is non-local, non-unitary, etc, and so MWI is the only valid interpretation of QM. Well, fields have all of the properties Eliezer wants them to have, and don’t know what you are talking about when you say ‘collapse’. And they sure don’t need many worlds to make sense. Actually, field theory is really similar to many worlds. The way you mathematically process a field, the way you turn it into something useful that can make a prediction, is that you average over all of the possible routes a particle can take from its source to its destination, in the same way that MWI proponents do. But field theory doesn’t require you to say that all of those possible other routes the particle could have taken from source to destination are real in other universes. Those other particles are just a tool that we use to calculate how the field evolves. The field is the real fundamental thing that actually carries information in the real universe.

    So… physicists of SSC… am I totally wrong here, or are the MWI folks the ones who are wrong?

    If I am right, then I think the mistake Eliezer made was simply not realizing that the physics he was studying was obsolete. And to be fair to him, field theory is so damn difficult that even physics PhD students don’t get more than a cursory education on it unless you are going to specialize in that field. In contrast we get about 2.5 years of education in Schrodinger type quantum mechanics.

    • lightvector says:

      I may easily be wrong (I have no direct formal knowledge of QFT), but my understanding is that Schrodinger’s equation is supposed to still apply even for QFT. It’s just that you need to use the most general form, which characterizes time evolution as a function of the Hamiltonian, and says nothing whatsoever about spatial derivatives.

      If you plug in the Hamiltonian for a single non-relativistic particle in the position basis and specialize Schrodinger’s equation to it, then you do get this second-order derivative in space. But that’s only if you specialize it to a universe consisting only of this single non-relativistic particle.

      So my limited understanding was: QFT still has its own Hamiltonian, it’s just a LOT more complicated, and that it still obeys the basic QM formalism – linearity and unitary time evolution in a Hilbert space whose dimensions correspond to the “states” of the system. And that Schrodinger’s equation may then also no longer always be the thing you want to most generally use, even if the system obeys it, because you may prefer something more specialized to the particular way you are constructing this Hamiltonian.

      A quick look at https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Relativistic_quantum_mechanics seems to suggest something that is at least not terribly contradictory to that understanding.
      (also a Google search turns up this, which I don’t see a strong reason to distrust, for now: https://www.quora.com/What-happens-to-the-Schrodinger-equation-and-wave-functions-as-we-go-from-non-relativistic-quantum-mechanics-to-quantum-field-theory)

    • smocc says:

      And so Schrodinger’s equation is fundamentally not used by the universe.

      Field theory does not have a wave function

      These are both wrong in ways that are really crucial to understand.

      If by Schrodinger’s equation you mean i\hbar d\psi/dt = -(\hbar^2/2m) d^2\psi/dx^2 + V(x)\psi(x), then yes, that Schrodinger equation is an approximation for non-relativistic particles, and |\psi(x)|^2 gives the spatial probability distribution of that particle.

      But if by Schrodinger’s equation you mean i\hbar d/dt |\psi> = H |\psi>, where |\psi> is a general state and H is a hermitian operator whose eigenvalues correspond to the energies of the system, then all quantum mechanics uses Schrodinger’s equation, even quantum field theory.
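
      Rendered out, the two senses (the second is the first, specialized to a single non-relativistic particle in the position basis):

      ```latex
      % General form: holds throughout quantum mechanics, QFT included
      i\hbar \frac{d}{dt}\lvert\psi\rangle = H\lvert\psi\rangle
      % Specialization: one non-relativistic particle in the position basis
      i\hbar \frac{\partial\psi(x,t)}{\partial t}
        = -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2} + V(x)\,\psi(x,t)
      ```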

      As for wavefunctions, if by wavefunction you mean a function whose absolute value squared gives the spatial probability distribution of a particle, then no, quantum field theory does not have wavefunctions.
      But if by wavefunction you mean a collection of coefficients that tells you how a given state is built out of basis states, then yes, quantum field theory has wavefunctions.

      It is a common misconception that quantum field theory is about how quantum behavior of particles arises from classical fields. That is not the case. For example, every field has an associated particle, but the state “a single particle with momentum p” does not correspond to a specific arrangement of the corresponding field. Rather, it corresponds to a linear combination of many possible configurations of the underlying field, in the same way that a general state for a non-relativistic particle is a linear combination of many possible position states.

      Because of this, quantum field theory is not an answer for quantum weirdness. QFT still has linear superposition, it still has the Born rule, it still has unitary time-evolution.

      • mustacheion says:

        As for those two points, I intended the first sense in which we both agree, but I accept your point that the terms ‘wavefunction’ and ‘Schrodinger’s Equation’ are broad enough that the statements I made were incorrect.

        I am not at all under the misconception that QFT is made from classical fields. It is unfortunate that the term field is overloaded with those two very distinct meanings.

        Of course QFT still has linear superposition, Born rule, and unitary time evolution. In my mind those are all totally reasonable, comprehensible, not-at-all-weird things that have a very clear interpretation that in no way whatsoever benefits from MWI or Copenhagen, unlike (what is the term, classical quantum mechanics?). By quantum weirdness I mostly mean wavefunction collapse which, in my head, does not occur in QFT. But here I admit that this is where I lost the ability to follow the mathematics of QFT and switched specializations, so I suppose I ought not have much confidence in this claim. I think at this point it would require a laborious discussion of thought experiments that invoke classical wavefunction collapse like Stern-Gerlach from the QFT perspective in order to explore this idea further, but I don’t feel like I have the time or energy to do that at the moment.

        • smocc says:

          QFT absolutely has wavefunction collapse in that the same initial state can end in only one of many distinct final states, and the probability of any one outcome is given by the square of a complex amplitude.

          For example, when you shoot an electron and a positron at each other you may end up observing another electron-positron pair, or a muon-antimuon pair, or two Z bosons. The state “collapses” to one of these outcomes every time you do the experiment.
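
          In symbols, it is the same Born-rule structure as ordinary QM, just written for scattering (assuming normalized in and out states):

          ```latex
          % Probability of observing final state f from initial state i:
          P(i \to f) = \bigl|\langle f \,|\, S \,|\, i \rangle\bigr|^2,
          \qquad \sum_f P(i \to f) = 1
          % e.g. f \in \{e^+e^-,\ \mu^+\mu^-,\ ZZ, \dots\} for an e^+e^- collision
          ```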

    • sovietKaleEatYou says:

      Quantum field theory is wonderful from the “shut up and calculate” point of view, but I don’t know if it even has a prediction of “what does the universe look like”. In the sense that I don’t think you can come up with even a toy theory that you could run on a (hypothetically infinitely fast) computer and isolate in your simulation something phenomenologically resembling the real world.

      The best “descriptive” theories people have at the moment are GR and nonrelativistic quantum mechanics. They are obviously not good enough to describe the full picture (neither, for that matter, is QFT), but they are good enough to come up with simplified models which we can then describe well enough to simulate the universe on a hypothetical supercomputer. Now properties of such models may or may not survive all the way to the “grand unifying theory” (if we ever discover one), but an ontological property that all existing models share has a good chance of surviving. So locality and topological weirdness from general relativity are expected to survive *in some form* in a grand unifying theory (and consistently agree with observations, such as black holes, which started out hypothetical). And the idea from quantum mechanics that the world as we know it is an attempt to make sense of a quasi-independent piece of a vast multi-universe wave function is sufficiently common in models of quantum mechanics (something that, once again, also makes weird predictions that are consistently confirmed by experiment) that it seems likely to make it into the “full picture” in some way. Of course it might not: physics is incomplete and anything we believe may turn out to be wrong.

      • mustacheion says:

        Yeah. While I can grok Stern-Gerlach, I run into combinatoric explosion when I try to grok Quantum Zeno experiments.

        Since I cannot do the math, and since there is no physics without math, I am just a quack, and nobody should take me seriously.

        I am grudgingly updating in the direction of many worlds as the least bad of all of the bad interpretations of QM. But like any good quack I am really just waiting for the day that somebody else discovers the real interpretation so that I can revel in having been less wrong than everybody else :P.

  27. carsonmcneil says:

    Well, this will probably get lost in the comments again… Yes, defining simplicity is not trivial, but there IS a rigorous mathematical definition! VC dimension describes simplicity in terms of the number of data points you would need to “shatter” the model, i.e. make it unable to fit SOME possible combination of points. This makes it clear that at a deep, mathematical level, simplicity and falsifiability are intimately tied.

    See the excellent proof (due to Vladimir Vapnik, the guy who invented SVMs) walked through here: https://mostafa-samir.github.io/ml-theory-pt2/
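
    A brute-force sketch of what “shattering” means for 2-D linear classifiers: three points in general position can be shattered, four cannot (the XOR labeling fails), so the VC dimension is 3. The search below is randomized, so treat it as illustrative rather than a proof; a rare false negative is possible in principle:

    ```python
    # Check whether sign(w.x + b) classifiers can realize every +/-1
    # labeling of a small 2-D point set, by random search over (w, b).
    import itertools
    import numpy as np

    def separable(points, labels, trials=20000, seed=0):
        """True if some sampled linear classifier realizes `labels` on `points`."""
        rng = np.random.default_rng(seed)
        pts = np.asarray(points, dtype=float)
        y = np.asarray(labels)
        for _ in range(trials):
            w, b = rng.normal(size=2), rng.normal()
            if np.all(np.sign(pts @ w + b) == y):
                return True
        return False  # (probably) not linearly separable

    def shattered(points):
        """True if every +/-1 labeling of `points` is linearly separable."""
        labelings = itertools.product([-1, 1], repeat=len(points))
        return all(separable(points, lab) for lab in labelings)

    print(shattered([(0, 0), (1, 0), (0, 1)]))          # True: 3 points shatter
    print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: XOR labeling fails
    ```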

  28. Iago the Yerfdog says:

    As I alluded to in a comment made last open thread, my problem with MWI is that I’d have to effectively forget that I believed it every time I made a decision. Because if MWI is correct, I already know what I’ll do: everything.

    “I know that I’m going to both buy this lottery ticket and not buy it; now let me sit down and decide whether or not to buy it,” is a nonsensical statement, especially when I both will and will not sit down to make the decision. The whole point of deciding between two actions is that I’ll only do one of them.

    I’ll go one step further: I don’t think anyone really can believe in MWI. If you ignore the practical implications of a belief anytime they are relevant, then there is no difference between believing it and merely believing that you believe it.

    • Thegnskald says:

      That approach is wrong for the same reason that thinking you have a 50/50 chance of winning the lottery, because there are only two possible outcomes, is wrong.

      • The Nybbler says:

        MWI implies you have a 100% chance of winning a “quantum lottery” in some universe. Every time. No matter how improbable, any combination of quantum events happened somewhere.

        • Thegnskald says:

          Not… quite. The whole “universe” thing is highly misleading.

          But close enough. And that doesn’t matter. You should expect to end up in the most likely universe with you still in it.

          And if we want practical ramifications, one major one is that suicide won’t work, and indeed will most likely just result in things being worse for you.

          • The Nybbler says:

            You should expect to end up in the most likely universe with you still in it.

            Why? All of those universes exist. There is no longer any single “me”.

          • Thegnskald says:

            Yes, and?

            Suppose in ten minutes Omega will start six billion simulations of you, as you are nine minutes from now.

            Do you expect to be in a simulation eleven minutes from now?

  29. mika says:

    I’ll post this here again, since people still seem to be missing the point about MWI: https://www.forbes.com/sites/chadorzel/2019/09/17/many-worlds-but-too-much-metaphor/#4531af66625d

  30. Chad_Nine says:

    Is this an attempt to justify special pleading on an idea you favor?

  31. vicoldi says:

    My favorite story about two theories making identical predictions:

    Tycho Brahe postulated that the Sun revolves around the Earth but every other planet revolves around the Sun. It is equivalent to Copernicus’ model under another reference frame, and therefore makes the same predictions.

    Still, Occam prefers the heliocentric model, and Newton had a much better chance of figuring out the underlying laws of gravity using Copernicus’ interpretation than Tycho Brahe’s.

    I think there is a reasonable chance that quantum physics is in a similar state: the two models make the same predictions but many worlds might give a more correct understanding of the world, so it might be necessary for figuring out the deeper laws of quantum physics.

  32. williamgr says:

    wavefunction branches, and then somewhere we can’t see it one branch mysteriously collapses

    I’m not sure if this is a misunderstanding of what (supposedly) happens in the Copenhagen interpretation, or just a case of using a “specialised” term (collapsing) but with a different meaning. In the Copenhagen interpretation the collapse is the evolution from the initial state into the observed state (i.e. it’s the initial wavefunction that collapses, not the wavefunctions corresponding to the non-observed states).

    The problem with the Copenhagen interpretation isn’t really that the wavefunction ends up in only one of several (previously) possible states — when you throw a die and get a six you don’t start going around worrying about what happened to the one, two, three, four or five — but that the way it separates the world into the observed (quantum) system and external (classical) observer is completely arbitrary and lacks predictive power about what happens once you include yourself within the quantum system (which you obviously have to, since you are also described by quantum mechanics).

    • Squirrel of Doom says:

      This ties into why I always hated “Schrödinger’s cat”:

      The cat must be an “observer” (at least if alive), so the whole story makes no sense.

  33. Squirrel of Doom says:

    Here’s what I’ve always wondered about the multiple universe theory:

    Wave functions collapse an awful lot. Probably at least a trillion times a second in my left nostril alone.

    So does that mean the theory says a trillion new universes are created per second from my left nostril alone?

    As someone with a degree in physics, I feel this certainly violates my conservation-of-energy instincts…

    But maybe I’ve misunderstood something?

    • Thegnskald says:

      It is a little simpler than that.

      Remembering that MWI is ultimately the intellectual descendant of pilot waves, it is misleading to think of new universes being created; it is more like a bunch of waves sloshing around and interacting in a complex structure.

      The “branching” can be thought of more as… the waves moving in different directions. The medium was already there, was always there. What changes is how much of the medium any given behavior-region of waves takes up. (This is often called amplitude.)

      So it isn’t that new universes are being created, it is that the space of the universe you are observing is continually being narrowed/constrained. It’s all still there.
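
      A minimal numerical sketch of that picture (the rotation below is an arbitrary stand-in for unitary evolution): amplitude gets redistributed between components, but the total squared norm never grows, which is why no new “stuff” is being created.

      ```python
      # "Branching" as amplitude redistribution: a unitary step (here a
      # 2x2 rotation) moves weight between the components of a state,
      # but the total squared norm stays exactly 1 throughout.
      import numpy as np

      theta = 0.4                      # arbitrary rotation angle per step
      U = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      psi = np.array([1.0, 0.0])       # all amplitude starts in one component

      for step in range(5):
          w = np.abs(psi) ** 2
          print(f"step {step}: weights = {w.round(4)}, total = {w.sum():.6f}")
          psi = U @ psi
      ```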

    • benf says:

      Conservation of energy is not actually a fundamental law of nature, only a local tendency. The universe is not time-translation symmetric (it expands), so there’s no corresponding conservation law. See:

      https://www.youtube.com/watch?v=04ERSb06dOg

  34. Jiro says:

    The process that makes wavefunctions collapse is an extra assumption!

    You’ve figured out that “which theory is simpler?” is nontrivial, but you’ve failed to figure out that that applies here too. You’ve just stated that many-worlds doesn’t involve extra moving parts, without really defining “extra moving parts” in a way that covers the relevant edge cases.

    And the simplest explanation for dinosaur fossils is neither one devil nor a billion dinosaurs; the simplest explanation is being a brain in a jar. It has only one assumption and explains everything. We reject “brain in a jar” and “the Devil made dinosaur fossils” because they make too few predictions, not because they contain too many assumptions; it’s plausibly correct to count the existence of dinosaurs as an assumption.

    • Thegnskald says:

      I think you forget all the assumptions that end up being implicit in “brain in a jar” – namely, the series of conditions that give rise to the state of the universe being projected to that brain.

      • Jiro says:

        That reasoning would not make it more complicated than “the devil creates dinosaur fossils” because the same objection would apply to that. The Devil is creating fossils that down to every minor detail are exactly like what real dinosaur fossils would be like, so all the complexity of actual dinosaurs is also complexity of the Devil theory.

        • Thegnskald says:

          Granted. I was more objecting to the idea that “brain in a jar” is simpler than “billion dinosaurs”.

  35. After all, Occam says not to multiply entities beyond necessity. And if the dinosaur theory posits a billion dinosaurs, that’s 999,999,999 more entities than are necessary to explain all those bones.

    To convincingly construct the fake ecosystem, the devil would have had to simulate all those dinosaurs somehow.

    How much can you comment about this subject without really understanding the math? Objective-collapse and pilot-wave theories seem intuitively simpler than many-worlds. But then you hear that pilot-wave theories propose “empty branches” which evolve forever but don’t affect anything because they no longer guide a particle, rather wasteful. And collapse theories aren’t really as minimalistic as they seem at first glance: if you assume an atom can be “decayed and not decayed,” that a cat can be “alive and dead,” it’s no great leap to assume that a human can observe the cat to be “dead and not dead,” and so on. But to really say which is simpler I’d have to understand the Schrödinger equation, and I don’t really.

    So the only thing I have to add is that when I hear the claim “we can’t ever tell the difference between these theories because they all have the same predictions,” I’m skeptical. You don’t have to understand these disciplines on a deep level to observe that they have continually improved their ability to see areas they previously thought they could never see. Maybe they all make the same predictions right now, but will that still be true in fifty years with better tools? For a while the heliocentric and geocentric theories made the same predictions, but with better tools they were able to find a parallax.

    It’s interesting how little resort to credentials you find in these discussions. There’s quite a lot of uninformed commentary on all manner of subjects, and so I assume a lot of the commentary on this subject would appear to an expert in physics to be similarly uninformed. Perhaps my commentary above laughably misunderstands something. But you don’t hear “get a degree before commenting on this!” You don’t need that argument, because you can just say “you don’t know anything, this is obvious, go and learn it.” Credentialism is found in those fields where the credentials are least useful.

    • jermo sapiens says:

      Credentialism is found in those fields where the credentials are least useful.

      Great observation. My own understanding of QM is “watched a lot of QM documentaries and read some pop-science books on it”, so I’m hopelessly out of my depth, even in this comment section. So you don’t need credentials to keep me from publishing an article in Nature on my cannabis-inspired thoughts that QM is like “totally out there maaaaan”.

      Fields that rely on credentialism are the ones where the Sokal Squared experiment is possible, exactly because the Sokal Squared experiment is possible in those fields.

    • broblawsky says:

      then you hear that pilot-wave theories propose “empty branches” which evolve forever but don’t affect anything because they no longer guide a particle, rather wasteful.

      Wasteful of what? Not energy, because no work is done. Wasting processing power is only meaningful if you’re talking about a simulation. The universe is under no constraint to follow principles of efficiency or elegance; these are preferences we impose on the models we create to try to understand the universe.

      And collapse theories aren’t really as minimalistic as they seem at first glance: if you assume an atom can be “decayed and not decayed,” that a cat can be “alive and dead,” it’s no great leap to assume that a human can observe the cat to be “dead and not dead,” and so on.

      A better way of looking at it might be to say that there is a vanishingly small probability that the cat will be observed to be both “dead and not dead” at the same time. Macroscopic collections of particles are (almost) permanently collapsed in position, due to particle-particle interactions.

      • It’s wasteful in the same way that the devil is wasteful and that MWI feels intuitively wasteful by assuming that parallel universes are required to produce what appears to us to be a single universe. The wave in pilot-wave theory is a physically real field which guides the physically real particle, and thereafter continues to evolve forever while affecting nothing in the “real” world. MWI saturates us with a multitude of universes and pilot wave theory saturates us with a multitude of empty waves.

        A better way of looking at it might be to say that there is a vanishingly small probability that the cat will be observed to be both “dead and not dead” at the same time. Macroscopic collections of particles are (almost) permanently collapsed in position, due to particle-particle interactions.

        Yeah, but once you assume that a particle can be decayed and not decayed, you’ve already crossed the Rubicon, and can no longer say “well, a particle can be in two places at once, but saying that I can is crazy.”

        • broblawsky says:

          It’s wasteful in the same way that the devil is wasteful and that MWI feels intuitively wasteful by assuming that parallel universes are required to produce what appears to us to be a single universe. The wave in pilot-wave theory is a physically real field which guides the physically real particle, and thereafter continues to evolve forever while affecting nothing in the “real” world. MWI saturates us with a multitude of universes and pilot wave theory saturates us with a multitude of empty waves.

          The Devil-as-fossil-planter is epistemologically problematic in the opposite direction – it uses unfalsifiable statements to create a much simpler (and arguably more elegant) universe. Pilot-wave and many-worlds interpretations are also unfalsifiable, but in a way that makes the universe more complex.

  36. kalimac says:

    Dumb question, possibly, but what’s the difference in hand-waving assumptions between “and then somewhere we can’t see it, one waveform branch mysteriously collapses” and “and then somewhere we can’t see it, another manyworlds alternate universe comes into existence”?

    • MicaiahC says:

      The way I reconcile with it is this: In (some versions of?) MWI, it is already “splitting” along every observable basis some large number of times: When you conduct an experiment, you’re only finding out which half (or third, or weird ass fraction) of universes you ended up in (say, the half of universes where the electron was measured as spin up, versus the half of universes where the electron was measured as spin down). Hence, you only see a single result of the experiment, hence Many Worlds.

      It’s much harder to argue that branch collapse only happens when you measure, because you’d have to explain what happens when you *don’t* measure, and what the wave equation says is that it sure looks like Many Worlds are out there.

      • kalimac says:

        I thought that if you don’t measure, the waveform doesn’t collapse.

        Your reconciliation doesn’t help me understand, because it assumes the answer to the question I’m asking (i.e. that the many worlds you can’t detect are actually there).

        • MicaiahC says:

          I was replying to your original question, which I read as asking for the difference between “a branch mysteriously collapses” and “a world mysteriously comes into existence” – implying that both are equally arbitrary about the timing and placement of collapse or universe creation, and hence equally incorrect.

          I responded by saying that the second branch of that question is based on incorrect premises: there’s no arbitrary universe creation in MWI, because splitting is *already* happening all the time, so the two sides are not in fact symmetric. I don’t see how saying “it assumes the answer to the question I’m asking” is a valid objection, because I’m saying that your question is already wrong about what one side (necessarily) has to assume.

          If you wanted to ask about why one theory is preferred in general over the other, you can ask that question, but I don’t think it’s very nice to ask a specific question and state that the answer is insufficient for a completely different and more general question without trying to understand why that original answer was given.

          I apologize if I was too brief or not understandable enough, but I can’t move forward if explaining my position results in: “well that’s not valid because it assumes the conclusion” and there’s no follow on as to what you’d consider a valid argument.

          • kalimac says:

            Sorry for any misunderstanding, but stating that one side’s handwaving is different from the other side’s because it was already handwaving before the question was asked – that really, really doesn’t help. If that’s going to be your answer, consider the follow-up buried in the original question, the way that a follow-up is buried in the question “Do you have the time?” to which “Yes” is technically a complete and adequate answer.

        • benf says:

          “I thought that if you don’t measure, the waveform doesn’t collapse.”

          A better way to say it would be: if the wave function doesn’t INTERACT with anything, it doesn’t have to interact in any particular way. This resolves the utter mystery into something that is so intuitive it borders on the tautological.

          • dionisos says:

            Except sometimes it interacts and doesn’t collapse at all.

            In fact, if there were no interactions without collapse, there would be no need for something like quantum mechanics in the first place.

      • Thegnskald says:

        You may want to look into the Elitzur–Vaidman bomb-tester.

        • MicaiahC says:

          I had to do this as a homework problem four years back and I’m not sure what I’m supposed to take away from this by looking into it. Is it the MWI section on the wikipedia page? Because I remember thinking along those lines and not being disturbed by the existence of this experiment at all.

          Can you expand if it’s not that?

          • Thegnskald says:

            If the universe has “split”, such that the universes no longer interact/interfere, and you are only finding out which universe you are in, the experiment doesn’t work. (Actually a bunch don’t work, in particular quantum erasure.)

            At minimum your description needs a mechanism for universes to recombine.

            To illustrate this, let’s create a particularly insane version of quantum erasure: A physicist is in a configuration where a machine will wipe their memory of an event completely in configuration A, and won’t in configuration B. Put this physicist in an experiment whereby they observe an intermediate state of the experiment.

            In configuration A, with the mind wipe, the next step of the experiment will behave as if the intermediate state wasn’t observed. In configuration B, without the mind wipe, the experiment will behave as if it was observed.

          • MicaiahC says:

            At minimum your description needs a mechanism for universes to recombine.

            Oh, I just took it as read that interaction / interference was happening, since the wave function is defined over complex phases. I was wrong in being so brief and not including that in my explanation. Thanks.

            I will note that I don’t particularly think of it as universes recombining (since the rest of the universe you are not modeling in the coherent state can still evolve differently and thus be distinguishable), but that’s quibbling over wording: We both agree that at the level of the experiment they are recombining.

            To make sure I understand correctly, when you say “will wipe the memory” in the quantum eraser experiment, you also mean wipe / reverse the state of the experimental equipment / anything that can decohere the eraser, right? Because “minds aren’t preferred over any other state of high energy / entropy” is the key insight I value from MWI.

          • Thegnskald says:

            Yeah, I mean “wipe all correlation with the observation”.

        • kalimac says:

          I read the Wikipedia article, which only brings up manyworlds briefly at the end. It says that it’s paradoxical for the system to know the status of an item without touching it, but the opening discussion in the article already explained why there’s nothing paradoxical about that. In any case, if it is paradoxical, I don’t see how it’s any less paradoxical if the result is different in some other universe.

          • Thegnskald says:

            The problem is that the outcome of the experiment depends on the universe not “splitting” without observation.

            The role of the observer is important, even if it isn’t special.

  37. jermo sapiens says:

    I forget where but I’ve heard this said about MWI: Better to have infinite universes than one God.

    The idea (I believe) was that the Copenhagen interpretation was used by theists to suggest that the laws of physics were compatible with the existence of God, human souls, and free will. Basically, since the universe is not deterministic, God could act and produce miracles without violating the laws of physics, and souls could produce free will by affecting quantum outcomes in the brain.

    Just googling around I see that phrase is used more to explain why the laws of physics are so finely tuned to allow for life, and not with respect to QM, so maybe I just misconstrued it when I first heard it.

    • sclmlw says:

      I’ve never heard that explanation. My (limited) understanding is that the principle of superposition suggests that both states exist simultaneously, as in the 2-slit experiment, where you get a wave pattern even if you use individual electrons. Meanwhile, once the wave function collapses, for that moment the universe “loses” the other states. You have to be able to explain what happened to those states, because we can’t just say we were wrong about whether they were there, since they affect the outcome. They were there, until they weren’t. This is the weird thing about the cat in the box that is both alive and dead. It actually has to be both at the same time, not just that we don’t know which one it is.

      MWI gets around that problem by saying that both were here, and existed, until the wave function collapsed and then both were realized as independent universes. Where did the superposition go? It split off from our universe into another, so it doesn’t really matter anymore. The other option/branch/superposition state still existed, but in a way we can trivialize, since it can’t impact our universe anymore.

      • jermo sapiens says:

        That’s not my understanding, but more likely than not my understanding is the incorrect one.

        My understanding is that what passes through the 2 slits is not the particle, but the wave function. Upon wave collapse, the other states are not lost because they never existed, they just had a probability of existing as represented by the wave. What we experience as a particle is just a localized effect of the measurement which caused the wave function collapse.

        • sclmlw says:

          If they never existed, how did they cause interference? My understanding is that the wave function is a mathematical representation of an underlying process/reality, not the ‘thing that passes through’ the slits itself, but maybe you’re using the term as shorthand for ‘what the wave function represents’.

          See also this video about locality. Observations from experiments seem to suggest that there isn’t some hidden variable stating whether the photon/electron/whatever will pass through one slit or another, and the universe reveals which one it was all along once it’s measured.
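
          One way to feel the force of the “how did they cause interference?” question is to add the two slit amplitudes numerically and compare with “it went through one slit or the other”. This is a toy model; every number below is invented:

          ```python
          # Two-slit toy model: complex amplitudes from both slits are
          # summed and THEN squared (interference fringes), versus squaring
          # each alone and adding (no fringes: the "never existed" picture).
          import numpy as np

          k = 2 * np.pi / 1.0              # wavenumber for wavelength 1
          slit_sep, screen_dist = 5.0, 100.0
          x = np.linspace(-40, 40, 9)      # screen positions

          def amp(x, slit_y):
              """Complex amplitude at screen position x from a slit at slit_y."""
              r = np.hypot(screen_dist, x - slit_y)
              return np.exp(1j * k * r) / r

          a1, a2 = amp(x, +slit_sep / 2), amp(x, -slit_sep / 2)
          fringes = np.abs(a1 + a2) ** 2                   # amplitudes add first
          no_fringes = np.abs(a1) ** 2 + np.abs(a2) ** 2   # classical mixture

          for xi, f, n in zip(x, fringes, no_fringes):
              print(f"x = {xi:+5.1f}   superposed = {f:.2e}   one-or-other = {n:.2e}")
          ```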

          • jermo sapiens says:

            I dont really know. I just have a model in my head that makes “sense” to me and it’s most certainly wrong.

            Whatever passes through the slit is something that interferes with itself. So it has to be a wave. Particles don’t interfere with themselves, waves do. I don’t think anybody is claiming to know what the wave itself is, but if it’s interfering, it’s a wave.

            Observations from experiments seem to suggest that there isn’t some hidden variable stating whether the photon/electron/whatever will pass through one slit or another, and the universe reveals which one it was all along once it’s measured.

            Yes, I understand this. In my (almost certainly wrong) mental model, this isn’t a (fatal) problem because when the electron is not being observed (as it travels from the source, through the slits, and to the detector), the electron is a wave and not a particle. It only behaves as a particle when it’s observed.

          • migo says:

            I have a similar intuition to jermo sapiens. I would only add that the wave function (+ Born rule) is a probability function describing the behavior of the particle when observed. (This sort of leads to a simulation-based view of the universe, emerging as an intricate system of realisations of random variables and evolutions of probability distributions.)

      • benf says:

        You have a batter on a three-two count. He has a certain probability of hitting the next pitch. The pitch arrives, the batter swings, and he either hits it or he doesn’t. What happens to all the other states? Where do they go?

        Hopefully the example illustrates my problem with the question.

        • sclmlw says:

          I think this is fundamentally different from the Measurement Problem of QM, since the different states didn’t interfere with one another to influence the final result. A quantum analogy would be something like,

          “A batter swings and either hits the ball, misses, or pops a foul. The final state of the ball at the end of the play depends, not only on whether the batter hit the ball, but also on all the other possible ways that play could have gone. At the instant the ball hits the shortstop’s glove, though, the ball is in only one location. Did the ball travel faster than the speed of light such that the other states that influenced the system disappeared? Did the ball follow a ‘pilot wave’, effectively ‘choosing’ one state from the beginning but depending on the existence of all the other states to reach its final destination? Did all the other states magically cease to exist? Or did all of them happen, but the universe split off to allow those other states to continue to exist independently after the ‘measurement’ was made?”

          Analogies to normal physics don’t work well, because they are counter-intuitive to what we actually experience. But there’s a problem for physicists with ignoring this question altogether. You can’t say the other slit wasn’t ‘chosen’ in the two-slit experiment, because it’s clear the wave function determined the final resulting position, not a simple either/or decision tree. Since the other states determine the final outcome, but ‘disappear’ once measurement happens, we need some way to explain the observation that accounts for this. Or in other words, if we accept that the unmeasured state is a superposition of all possible states, what happens to the other possible states – which have a real impact on the outcome – when measurement occurs?

    • Robert Jones says:

      God has to be very careful in doing this, because the observations have to comply with the predicted probability distribution. I think God could put in the odd miracle here and there without throwing the numbers off in a detectable way, but he can’t intervene routinely. This may or may not be compatible with any given theism.

      Enjoyable though Penrose’s book was, the idea that quantum effects are important in the brain is bilge, even before one gets to the spiritualist part. Essentially, randomness is no better for free will than determinism.

      • benf says:

        “Essentially, randomness is no better for free will than determinism.”

        I used to agree but now I don’t. True Copenhagen randomness gives free will elbow room because it allows for the future to not be totally determined by the past. Randomness can accumulate through natural selection into order which is not chained to an infinite regress of necessary causes. That’s enough space for free will to do most of what we want it to do.

  38. Adama says:

    Please forgive me if this point has been raised already:

    Problems with Occam’s razor tend to arise when simpler is wrongly understood to mean “to have fewer moving parts” rather than the more accurate “to make fewer assumptions.”

    Evolution is far deeper and more complex (many more moving parts) than creationism, but the former in its proper form eschews assumption while the latter depends on it entirely.

    Evolution is certainly the Ockham-preferred theory.

    • sclmlw says:

      It’s a lot of comments to go through. I made the same point above and in the previous post on this subject, but Scott seems to be making the same error. The other half is that this isn’t about telling us what’s ultimately “True”, so much as choosing between competing hypotheses. Presumably, invocation of Satan requires fewer assumptions than the completely hypothesis-free “every bone appeared there as if by magic”, but that’s not what it’s competing against. In the end, all these are just heuristics about how to form hypotheses. Confidence comes when we actually test the hypothesis to determine whether it has predictive power. Until then it’s just philosophy.

    • bagel says:

      Even assumptions are, after a fashion, moving parts. I remember a wonderful lecture from my undergrad, where the prof recounted a story of some philosophers who were considering two particular sets of axioms, which had different minimum representation sizes in different languages. In which language should you measure assumptions?

      The philosophers in question were Greek, so the answer was obviously Ancient Greek! They didn’t see what all the fuss was about. 😉

      In a more formal sense, Occam is definitely onto something, in that the further you are from an accusation of over-fitting, the more impressive it is when your proposed function explains the data. But how far that can take you is less clear.

    • Roebuck says:

      I feel similarly.

      The reason we don’t like the Devil is roughly “if we had ocean fish fossils on top of Mt. Everest and geology couldn’t explain it, or if something in general were weird about the fossils, we would capitulate, but you would still claim a Devil, just a different one”.

      It is unfair that, in the face of N specific observed fossils, we compare the evolution theory to “a Devil that would plant these N specific observed fossils”. Such a Devil is a very refined theory and a very accurate one, true, but we have no illusion that if any of these fossils were different, the proponents of the existing Devil theory would have proposed a different Devil theory, again a very specific and accurate one.

      It’s rather more appropriate to compare evolution to the class of all possible Devil theories and conclude that evolution makes a specific claim about how the fossils will relate to each other (for example, we expect to find many small animals’ fossils that are older than the oldest fossil of a big animal such as an elephant) while Devil theories have something that fits any evidence.

      Not sure how to formalise it, but basically it’s about the accuracy vs complexity of competing theories, where complexity includes the flexibility of the theory – the degree to which a malicious actor could modify it to fit slightly different evidence.

      • sclmlw says:

        I think you can simplify this in one stroke: We gain confidence in a model when it makes accurate predictions a priori.

        For the example above, one system is a model that makes predictions that can be tested. After testing those predictions we either have to change the model, or we gain confidence in the predictive power of the model. The other system makes no predictions and has nothing to say about each new observation. We have no confidence in its predictive power, nor do we expect to obtain confidence in it unless it makes predictions that can be tested. We may never have to update it in the face of new evidence, but its utility is purely philosophical.

  39. Star says:

    People confuse analogy for reality when discussing quantum objects.
    The wave-particle duality permeates all quantum objects (why is it only taught to kids as a property of light when it applies to everything?). To ask which nature is more correct is to confuse incomplete models (analogies) for what is really going on, as opposed to seeing them as useful but limited tools.

    MWI is a model with no known engineering uses. Particle theories pay rent: I can build a solar panel with the photo-electric effect. The wave theory pays rent: I can build a phased-array radar that can paint every object in the sky faster than a rotating dish. I can’t build a fountain that takes water from the infinite parallel worlds. If someone can get that working, great; till then, tell me about your perpetual motion machine again… It’s a “not-even-wrong” type theory.

    As to wave function collapse, Bell’s inequality forces the issue. It sucks, but we have to choose one to give up: locality, realism, or freedom-of-choice. As dropping locality torches causality, I’m not a fan. Dropping realism gets you MWI: “everything happens… just like over there, where you can’t look” – bleck, why do people like this one? So sloppy. I guess I’m on team superdeterminism, ’cause if you look deep into “free will”, can you tell me it’s a coherent idea? Something something pilot wave bla bla bla…
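
    For reference, the CHSH form of the inequality that forces that three-way choice, where a, a′ and b, b′ are the two detector settings on each side and E is the measured correlation:

    ```latex
    % Any local-realist theory with free choice of settings obeys
    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2
    % while quantum mechanics allows (and experiment observes) up to
    |S| = 2\sqrt{2}
    ```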

    On strings: I’ll be interested when they can make even one testable string hypothesis; till then, “theories” lol

    Also count me on team the devil (gotta be on his good side these days, with the direction the world is heading and all)

    • benf says:

      We can redefine locality with an Einstein-Rosen bridge and harmonize Bell’s inequality with Einsteinian locality. ER = EPR is the buzzword, and it strikes me as hugely compelling and, in retrospect, a little bit obvious.

  40. Alex M says:

    Well, I mean, the vanishing physicist has a point. If something is untestable, who gives a fuck? By definition, you have no way of testing and resolving it, so you put that question into a little bucket marked “philosophical questions for later” and sometimes you go back to that bucket when you have more empirical scientific methods to test out some of those questions. Revisiting that bucket before then is just a pointless waste of time for navel-gazers who have nothing better to do. This is actually my biggest pet peeve about the rationalist community – the amount of drivel written based on nothing more than speculation, with no executable plan or testing methodology. Tons of navel-gazing speculation written about “the nature of consciousness” and “how human psychology really works under the curtain” but the only ones who actually bother to test those theories out in real life are enterprising internet trolls. (Whom I actually have a lot of respect for, despite their misanthropy – at least they take decisive action to figure out how people’s minds work, instead of resting their assumptions on wild speculation and garbage credentialism.)

    Perhaps it would help if we defined the difference between philosophy, theory, and science.

    Philosophy = You have a hypothesis. You have no way to test it according to the scientific tests of replicability and predictability. It could be true, it could be false. Society should not treat this as anything more than speculation or religion, at best.

    Theory = You have a hypothesis. You have no way to test it according to replicability and predictability, but the evidence leans decisively towards the hypothesis being true. Society should operate as if the hypothesis were true, but remain open to changing their minds if decisive evidence is later presented against it.

    Science = You have a hypothesis, which has been successfully tested according to the replicability and predictability tests. The default societal assumption should be that the hypothesis is true, and society should use it to inform decision-making and policy.

    According to this easy reference guide, which bucket does each of these three assumptions fall into? More importantly, which bucket do many of our current “scientific” assumptions fall into? Perhaps the answer to that question might give us some important clues as to why the rate of scientific progress has slowed down. Perhaps our so-called leading “scientists” in some fields are so incoherent that they can’t even distinguish between science, theory, and philosophy, which would indicate that a purge of those fields is in order.

    • This is actually my biggest pet peeve about the rationalist community – the amount of drivel written based on nothing more than speculation, with no executable plan or testing methodology.

      You can figure a lot out based on speculation: someone in the Middle Ages figured out that people in the southern hemisphere would see the sun in the north and experience seasons opposite those of Europeans. You could have told him “you have no way to prove any of this; if you actually have a plan to go there and find out, then I’ll believe you.”

      There’s a big difference between “no way to test it right now” and “no way to ever test it.”

      • Alex M says:

        There’s a big difference between “no way to test it right now” and “no way to ever test it.”

        I agree with you; that’s why the little bucket I mentioned is marked “philosophical questions for later”.

        You can figure a lot out based on speculation

        I agree – you can also figure a lot out that is completely incorrect based on speculation. Just because the speculation in your particular example happened to be correct doesn’t erase the numerous other times that speculation led science down the completely wrong path.

        I would say that on the whole, theories that are based on speculation turn out to be garbage more often than they turn out to be gold. Coming up with 100 speculative scientific theories of which 1 is true is terrible science. Even coming up with 100 scientific theories of which 80 are true is terrible science, because the remaining 20 theories which are completely false will be treated as if they are true and get built upon by future scientists. This gradual accretion of bullshit is how we wind up with entire fields of science that are completely fake, such as astrology, phrenology, sociology, and economics.

        • Thegnskald says:

          Give me ninety nine bullshit ideas and one novel insight into the universe; I’ll take that deal.

          • Alex M says:

            Do you realize how much societal damage bullshit ideas can do when they’re allowed to gain traction? Bullshit ideas can literally destroy societies and cause mass death. We’ll hopefully never find out how much destruction a really bullshit idea could cause, but I suspect it may have something to do with The Great Filter.

            If you’re going to say something this unwise, I’d appreciate it if you could at least put some effort into explaining your reasoning. To me, your comment of “I’ll take that deal” is the equivalent of saying “I’d pet a tiger if it had shiny fur” or “I don’t know about you, but I’d certainly play in traffic!” I don’t even know how to respond to that. Good for you, I suppose, but meanwhile the rest of us are over here in the branch of reality where survival instincts matter, and you’re going to need to explain your thinking to the rest of us. Bullshit ideas cause mass death, and unless the one really great idea outweighs the 99 bullshit ones, it almost certainly isn’t worth it. I think it would be useful to have a methodology to filter out bullshit in science, so that we could start pruning defective branches.

          • Thegnskald says:

            What percentage of ideas do you think end up being true?

            1% looks like an amazing deal to me compared to my expectation of the baseline.

            I’ll take your deal because it looks like a better deal than what the universe currently has on offer.

          • Alex M says:

            What percentage of ideas do you think could result in societal collapse? Or complete extinction of humanity? If you believe any such ideas exist, you would be a lot more cautious about randomly pulling ideas out of the urn.

          • Thegnskald says:

            Given your attitude, I assign some probability to the idea that the concept of the Great Filter might itself be capable of destroying a society, or at least of filtering it out from ever being capable of the sort of things that would provide evidence of its own existence to those looking for it.

          • Thegnskald says:

            Alternatively, if you prefer, clearly fretting about the Great Filter didn’t do any of the massively abundant civilizations that fretted about it any good. So I’d rather just see as much of the universe as I can before the cosmic horror, which gets everyone no matter what they do, comes around for me.

          • Alex M says:

            I feel like you’re not grasping my point adequately, and perhaps that’s my fault for not framing it correctly. Let me try again. My premise is that not all ideas are good. There are three types of ideas.

            White Ball ideas are the types that are of unquestionable benefit to society. For example, electricity, the internet, or some hypothetical superforecasting technology would be white ball ideas.

            Black Ball ideas are the ones that Bostrom describes, which could easily wipe us out of existence. We have never encountered a Black Ball idea because if we had we would no longer exist.

            Grey Ball ideas are the ideas which cause an incredible amount of suffering and immiseration (mostly by virtue of stupid and ineffectual social engineering), but not enough to completely destroy entire societies. Ideas such as Lysenkoism, Economics, Sociology, and IdPol are “Grey Ball” ideas because they hold back progress and cause a lot of unnecessary suffering even though they are not dangerous enough to completely wipe us out.

            You seem to be under the impression that the current failure mode of society stems from the fact that we do not have enough “White Ball” ideas. What I am telling you is that you are mistaken. The reason our society operates in failure mode is because we have too many Grey Ball ideas. We do not critically need more good “White Ball” ideas (even though those would be nice to have): what we need instead is a better way of filtering out the shitty “Grey Ball” and “Black Ball” ideas. In my opinion, you are completely misdiagnosing society’s problems and thus proposing the wrong solution. Our current scientific methodology has a higher-than-acceptable ratio of “Grey Ball” ideas to “White Ball” ideas, and this is why human civilization is currently on a path to extinction. In order to fix this problem, we need to fix the way in which we conduct science.
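
            To make where we disagree concrete, here is a toy expected-value sketch in Python (every number is invented for illustration; the point is only that the verdict hinges on payoff ratios that neither of us actually knows):

            # Toy model: is "99 bullshit ideas + 1 insight" a good deal?
            p_white, v_white = 0.01, 1000.0   # rare breakthrough, large benefit
            p_grey,  v_grey  = 0.20, -50.0    # bad-but-survivable idea, real harm
            p_dud,   v_dud   = 0.79, 0.0      # inert speculation, no effect

            ev = p_white * v_white + p_grey * v_grey + p_dud * v_dud
            print(ev)  # 0.01*1000 - 0.20*50 = 0.0: exactly on the knife's edge

            Shift any of those numbers and the deal flips from obviously good to obviously catastrophic – and a “Black Ball” term with effectively unbounded negative payoff dominates the sum no matter how small its probability.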

    • Markk says:

      Your reference guide would seem to count as ‘Philosophy’ according to your reference guide.

    • Roebuck says:

      Not sure if you’re implying the contrary, but whether a religion is true has real consequences, and the more convinced I become that I have the right answer about religion, the more likely I am to live in peace.

      But if you need a different example, think about economics. You have the 2008 financial crisis and you know that this crisis has different causes than the previous ones, that it’s occurring in a vastly different world than the previous ones and that each subsequent crisis will be different from this one too (although this assumption is not important). In other words, you have an event which happens once. You have many economic theories, each one supporting a different policy response. You need to distinguish which theory is more likely to be useful.

      It has real consequences but you cannot test it before you reach a decision on it. How do you proceed?

  41. fluorocarbon says:

    I’ve been trying to wrap my head around the difference between the many worlds and Copenhagen interpretations since the last post. As a layperson, I find it easier to think about cats in boxes than particles and waves. This is the way I’ve been thinking about it; I’m sure I got some parts of it wrong, and I hope someone more knowledgeable can correct me:

    Let’s start with Schrödinger’s cat thought experiment but make it a little more humane and say the cat could be either asleep or awake. If we open the box, there’s a 50% chance we see the cat sleeping and a 50% chance it’s awake. But if we hold our ear up to the box before opening it we hear it both snoring and meowing (double slit interference). If we open the box, take a look, then close it, we can only hear the cat snoring or meowing; we no longer hear it doing both. However, if we had opened the box with our eyes closed, closed it, then listened again, we would hear the cat both snoring and meowing again (quantum eraser). Also, all these weird things only happen if the cat is very small. For unknown reasons a large cat can never be in both states at once (decoherence).
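
    In bare amplitude terms (my lay attempt at the standard two-path formula, so corrections welcome), the “snoring and meowing at once” is the cross term:

    P(x) = |\psi_{\text{asleep}}(x) + \psi_{\text{awake}}(x)|^2 = |\psi_{\text{asleep}}|^2 + |\psi_{\text{awake}}|^2 + 2\,\mathrm{Re}\big[\psi_{\text{asleep}}^{*}\,\psi_{\text{awake}}\big]

    Peeking first (measuring) kills that last term, leaving the plain 50/50 sum – which matches “open, look, close: no more double sound”.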

    The many worlds theory says that there are two separate universes in the box that interfere with one another, but only until someone looks inside the box. As soon as it’s open, the universes stop interfering with each other, but the observer’s universe splits in two: one attaches to the sleeping-cat box universe and one attaches to the awake-cat box universe. The Copenhagen interpretation says that the cat is literally both asleep and awake at the same time, but as soon as someone looks inside the box, the probability “collapses” and the cat is either asleep or awake.

    To me, neither one of these seems any simpler. When I first heard about the Schrödinger’s cat thought experiment, I thought the many worlds interpretation made more sense. But then I learned about interference. In many worlds, not only are there multiple universes, but the universes can interfere with one another – though only for a while, and only until each attaches to some other universe (decoheres). On the other hand, the Copenhagen interpretation doesn’t make much sense either.

    In examples I. and II. in Scott’s post, the theories are presented as A+B where A and B are unrelated and removing B doesn’t change the predictive power of the theory as a whole. For example, acids and bases combine (A) and, unrelated, there’s a supernova (B). In these cases it’s easy to simplify the theory by removing B. But I don’t think it’s the same thing when it comes to quantum interpretations. I could be and probably am misunderstanding things, but I don’t think Copenhagen is many worlds plus something else. I think it’s more like the geocentric vs. heliocentric models of the solar system. You can’t take away parts of the geocentric model and get the heliocentric model; they’re different theories entirely. We have to use different heuristics to figure out which one to believe.

    If I were to add my own section IV. to the post I would say: what if you had your memory erased and were placed in a universe where scientists were split between the Ptolemaic model and the Tychonic model of the solar system. What would be the correct choice to believe? They both make the same predictions. Neither one is really correct or simpler. That’s mostly how I feel about the two quantum models and for the time being I think the best choice is to wait for a quantum Kepler to discover quantum elliptical orbits and sort it out.

    • benf says:

      Try not to take the Schrödinger’s cat example too literally. Holding your ear to the box is an observation, just as much as opening it is. It was actually devised as a way of arguing that superposition is absurd and there must be a hidden “truth” to whether a quantum state is one thing or the other. That has been pretty well sorted out as wrong: quantum randomness is truly random, period. No more levels of causality to unpack. But from a philosophical point of view this makes perfect sense: you can’t have an infinitely regressing series of causes. There has to be an uncaused cause. Quantum mechanics is what we figured out to describe the behavior of that fundamental level of reality, where things are not caused by other things, but CAN cause other things, and it’s at that level of causal interaction that they simply ARE one way or the other, because CAUSALITY does not permit the “cat” to be both alive and dead.

      The superposition thing is what the pre-causal world looks like. It’s certainly weird, but it’s not that weird, and actually it’s remarkable how intelligible it is: it’s essentially perfectly probabilistic.

      • fluorocarbon says:

        I’m just using the Schrödinger’s cat experiment as a metaphor for the double-slit experiment because I find it more intuitive. When I say that I hear the cat “both snoring and meowing” I mean that when I observe a single electron shot at two slits, I see a diffraction pattern.

        Quantum mechanics is what we figured out to describe the behavior of that fundamental level of reality, where things are not caused by other things, but CAN cause other things, and it’s at that level of causal interaction that they simply ARE one way or the other, because CAUSALITY does not permit the “cat” to be both alive and dead.

        I’m not sure I understand—don’t we observe that the cat is both alive and dead at the same time? The double slit experiment shows that our observations match what would happen if a single electron were to pass through both slits at once (unless you measure it right before passing through the slits).

        To clarify my point from the original post: when the many worlds interpretation was first explained to me by a friend, it was explained as:

        The cat is not both alive and dead at the same time. You’ve been fooled by those tricky Danes! Quantum effects means that there are actually two universes: one where the cat is alive and one where it’s dead. It’s just random and sciency, so we can’t know whether the cat is alive or dead in our universe until we open the box.

        My misunderstanding was reinforced by the (misleading) film strip graphic in the Wikipedia article about many worlds.

        When I did more research I found out that we really hear the cat snoring and meowing at the same time (i.e. we see a diffraction pattern). We really do observe what would happen if a single electron went through both slits. It doesn’t go through one slit in one universe and another slit in another universe, like in the simplified many worlds interpretation, but it seems to go through both slits! In our universe! Unless you measure it first!

        I was confused enough that I ended up taking out The Fabric of Reality by David Deutsch from the library yesterday. According to him (p. 47):

        In other words, particles are grouped into parallel universes. They are ‘parallel’ in the sense that within each universe particles interact with each other just as they do in the tangible universe, but each universe affects the others only weakly, through interference phenomena.

        So now I’m even more confused. The Copenhagen interpretation seems to say, “weird quantum stuff happens because of *mumble* *mumble* collapse *mumble* *mumble*.” That isn’t very satisfactory. But then many worlds comes along and says, “collapse is a totally stupid idea you idiots! It’s obvious that weird quantum stuff happens because *mumble* *mumble* interactions between universes *mumble* *mumble*.”

        Everything is so strange, and experts disagree so much, that I imagine the likeliest scenario is that both interpretations are wrong and, even if we figure out the right answer, it may not be clear which one was more wrong.

        • benf says:

          Observing the diffraction pattern on the other side of the double slit experiment is NOT “observing the superposition” or “hearing the cat snoring and mewing”. The diffraction pattern comes from the ACCUMULATION of observations of SINGLE events at the end of the device, and those observations are CONSISTENT with the electron passing through either slit and the probabilities of observations at the end of the device destructively interfering with each other. The superposition is a projection within the model about the state of the electron BEFORE the observation.
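
          A minimal simulation of that accumulation (my sketch; numpy assumed, all parameters arbitrary) – each detection is one sampled point, and the fringes exist only in the histogram of many single events:

          import numpy as np

          # Screen positions and two unit amplitudes with opposite phase gradients
          x = np.linspace(-10, 10, 2001)
          psi1 = np.exp(1j * 2.0 * x)    # contribution "via slit 1" (arbitrary units)
          psi2 = np.exp(-1j * 2.0 * x)   # contribution "via slit 2"

          p = np.abs(psi1 + psi2) ** 2   # interference: proportional to cos(2x)^2
          p /= p.sum()                   # normalize into a sampling distribution

          rng = np.random.default_rng(0)
          hits = rng.choice(x, size=100000, p=p)       # one point per "electron"
          counts, edges = np.histogram(hits, bins=100)
          print(counts)                  # fringe structure, built one dot at a time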

          Perhaps framing it this way might help: There is no electron before it is observed. There is a wave function. That wave function has properties such that, if you interact with it, it “pushes back” with the strength of 0.511 MeV (at rest). The “center” of that interaction will be the point-like spatial location we call “the electron”. But the electron is not “really” here or “really” there or “really” ANYWHERE absent an interaction. The “electron” is an abstraction, like the “center of gravity” of an object. Is the “center of gravity” real? Well, it allows you to make certain predictions about the behavior of an object when you apply another force to it. But you can tear apart the whole thing and never find the “center of gravity”. It’s a property of the system and the interaction and an abstraction that we use to predict the behavior of the system.

          The Feynman diagrams are not pictures of reality. They’re MODELS. You model the two particles as if they were Newtonian objects, and you can get pretty damn close to the right answer. But they aren’t. They’re fields. Fields are not particles, but they can be modeled using particles.

          If you need a cat/box metaphor, the box has no cat in it at all before you open it. The superposition is the odds distribution you have of getting an alive cat or a dead cat. That odds distribution includes complex numbers but is otherwise perfectly intelligible to our intuitions about probability.

    • Vermora says:

      I prefer to use the term “states” rather than “universes” to describe MWI scenarios.

      (The following is written from a MWI point-of-view. I’m not a physicist, and may be totally wrong about parts of this.)

      The inside of the box is in two separate quantum states. When you interact with the box – in any way, whether opening it or just putting your ear to it – you go into two separate non-interacting states.

      The size of the cat doesn’t matter. However, it is very very difficult to not interact with large systems. A few particles is just about possible. A box with a cat in it is out of the question. So you would have split into multiple states long before you opened the box.

      This isn’t a hard law of quantum mechanics; it’s simply mechanically difficult to make a large system that doesn’t interact with its outside environment in any way whatsoever. The tiniest vibration from a shifting cat is too much.

      As soon as it’s open, the universes stop interfering with each other, but the observer’s universe splits in two

      I don’t think it’s that the different states of the box stop interfering with each other. It’s more that we can only make those kinds of observations when we are only in one state and haven’t interacted with the box yet. Once you interact with the box and split into two quantum states, that interference is still happening but neither of your states can observe it.

      Nothing special happens when you interact with a quantum system (from a god’s eye point of view). But since you’re in two separate states now – corresponding to the two states of the dead-and-alive cat – you make different observations from when you were in just one quantum state.

      • > dead-and-alive cat

        Cats can’t be both dead and alive. That was Schrödinger’s point. It is strange that people think they can be.

      • TheAncientGeeksTAG says:

        Once you interact with the box and split into two quantum states, that interference is still happening but neither of your states can observe it.

        That’s what MWI-ers believe, and how they want it to work – but there is no particular reason the states would be non-interacting.

        Under a picture where measurement is entirely described by the SWE, a physicist observing a superposed state will go into a superposition of states as well. However, it is not obvious that each physicist-state will have a classical-style perception of a cat that is either alive or dead: this is only the case if the basis is “just right” – hence the famous basis problem.

        As Penrose writes (Road to Reality 29.8): “Why do we not permit these superposed perception states? Until we know exactly what it is about a quantum state that allows it to be considered as a ‘perception’, and consequently see that such superpositions are ‘not allowed’, we have really got nowhere in explaining why the real world of our experiences cannot involve superpositions of live and dead cats.”

  42. Statismagician says:

    Perhaps I’m laboring under too classical an understanding, but to my mind identical observable predictions and identical predictions are not at all the same thing. A supernova, no matter how far away, is a positive prediction and the burden of proof is squarely on the chemists.

  43. sclmlw says:

    I think there’s some fundamental misreading of Occam’s razor going on here. First, the razor makes no claim about which hypothesis is “true”, since that’s not fundamentally knowable. It only helps us decide between two competing hypotheses which we should prefer.

    Second, the popular rephrasing, “the simplest solution is the best one” is a very misleading interpretation of the razor, since often the opposite is indicated by it. The best shorthand I’ve heard is, “between competing hypotheses we should prefer the one that requires the fewest assumptions.” That can easily be the more complicated hypothesis, so long as there’s a lot of evidence pointing toward the complex hypothesis that would require additional assumptions by the competing hypothesis.

    For the first case above, the supernova model assumes everything the nova-agnostic model does plus a supernova, so application of the razor would lead us to prefer the ‘simpler’ model.

    For the second case, Satan burying fossils requires us to not only assume Satan, but also assume a bunch of things about Satan’s character, like that he’s a very hard worker, capable of burying lots of bones, in specific patterns, and has been successful at fooling scientists with bones that are non-contradictory because he’s good at mimicking creation-like events, etc. So we would prefer the hypothesis that requires fewer assumptions, i.e. the fossil record. Maybe there’s a better hypothesis that would explain the data using fewer assumptions, but it’s probably not Satan burying fossils – unless we start seeing direct evidence of the existence of an evil being with orange eyes who goes about with dinosaur bones in his pockets and creates mischief.

    For the third case, you have to talk to physicists about which requires fewer assumptions. I know many view MWI as only assuming one thing – that each collapse of the wave function creates a branching path. Not sure that you can collapse that down to only “one” assumption, since it seems to imply other assumptions similar to the Satan example above. You’ve also got to assume the other paths cannot interact with our own path. You also have to assume there’s ‘space’ for all these universes, that this space cannot be observed otherwise, etc. (I know the concept of space here is not the same as within the space-time of the current observable universe, but then again that universe is observable, so anything outside it is necessarily assumed.) Perhaps the problem here is that we haven’t identified a good competing hypothesis capable of explaining the rest of the data using fewer assumptions. That doesn’t make MWI true, just the preferred explanation for why superimposed states can collapse to a single state. Where did the other state go? MWI answers that question. (Again, physics is far afield for me, so I’m sure others here will correct me.)

    Finally, this is all just about hypothesis generation. Which is all fine and good, but it’s philosophy not science. Science is the branch of philosophy which requires the testing of hypotheses. We might prefer MWI as a hypothesis because it explains our observations, but we don’t start to have confidence in our hypotheses until we test the predictions they make. Paleontology makes predictions about future fossils, patterns of fossils, etc. We can test those predictions, and as we do we gain more and more confidence in our hypothesis. This is why we think of paleontology a little differently from how we think of MWI, even though both are good philosophy.

    • doubleunplussed says:

      that each collapse of the wave function creates a branching path.

      This is not what MWI assumes. Rather, MWI proponents say that the branching structure that regular, uncontroversial quantum mechanics already predicts entangled systems naturally evolve into *creates the illusion of collapse* when it happens to complicated, many-particle systems, even though no collapse actually occurs. The branching – which is really just blobs of amplitude getting far enough away from each other in the very high dimensional space of the configuration of all particles involved that they can’t interact anymore – is already there and can be verified by anyone familiar with the Schrödinger equation and how subsystems combine into larger systems in quantum mechanics. MWI just says that that branching *is* what we call collapse. Copenhagen, on the other hand, jumps in as soon as the systems get too big and postulates that all but one of the branches vanish before they can fork off. That’s the collapse postulate. MWI says “Well, you can’t observe all the other branches anyway, so why invent a rule that cuts them off? Maybe they’re just still there.”

      Just to reiterate, MWI does away with collapse; it says that apparent collapse is already there in the theory without having to explicitly add it – it is an emergent phenomenon of regular quantum mechanics. The main criticism of it is that if it’s true, then although us humans kind of split up and take many branches, we subjectively experience the path we take through them to occur with probability given by the Born rule, which has not been derived in the context of MWI to the satisfaction of MWI’s critics (though it has to the satisfaction of some of its proponents – this remains controversial, and I swear I’ll read the derivations and decide for myself one day).
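
      To put the branching in symbols (the standard measurement-as-entanglement story; the notation is mine): under nothing but Schrödinger evolution,

      \big(a\,|\uparrow\rangle + b\,|\downarrow\rangle\big)\otimes|A_0\rangle \;\longrightarrow\; a\,|\uparrow\rangle|A_\uparrow\rangle + b\,|\downarrow\rangle|A_\downarrow\rangle

      The two terms are the “branches”. Once the apparatus states |A_\uparrow\rangle and |A_\downarrow\rangle differ in the positions of ~10^23 particles, they are effectively orthogonal and can no longer interfere. Copenhagen adds a postulate deleting one term; MWI keeps both, and then owes us the Born weights |a|^2 and |b|^2 as an emergent fact.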

      • sclmlw says:

        Sorry, I know use of the word “collapse” is technically incorrect when talking about MWI, and that was an error in language on my part. I know the phenomenon referred to as collapse by others is explained in a different way by MWI, such that it preserves all the other branching paths. Thank you for the correction.

        Or at least, I think I know that, since this isn’t really my field so all I can speak to is the philosophy of science behind it. As a hypothesis MWI is interesting philosophy, and it may well be an accurate representation of the universe. But it’s not strictly science, since we can’t test it empirically. “Non-empirical science” is a contradiction in terms. Either we can test it empirically, or we can’t and all we can do is debate its internal logic philosophically. MWI proponents and detractors all focus on whether it has perceived benefits or holes philosophically for a reason: it can’t currently be tested.

        Is it even possible to test it?

        • Harry Maurice Johnston says:

          I think the point is that we can’t test the conventional interpretation, either.

          The underlying problem is: given two theories which produce identical predictions, but representing different underlying realities, which should you prefer?

          • sclmlw says:

            Good point. Occam’s razor would have us prefer the hypothesis that requires us to make the fewest assumptions. I’m not a physicist, so I don’t know whether MWI is better than, say, Bohmian mechanics, or a number of other QM interpretations out there on that front.

            My point is simply that this is a philosophical preference, not a scientific one. MWI is not a scientifically testable hypothesis, and therefore the question about whether it should be preferred against competing hypotheses is a philosophical debate, not one about which we can have any confidence unless we find a way to test it.

            We can’t achieve confidence in a model through philosophical debate alone. All we can do is determine whether it’s logically internally consistent, and whether it requires fewer logical leaps than other candidates. After all that we still have no confidence in an untested model – no matter how great the math works out. Scientists have been burned on this too many times before.

    • Vermora says:

      You also have to assume there’s ‘space’ for all these universes

      My understanding is that there doesn’t have to be any extra space. When a particle in two separate states interacts with a human observer, you get a human observer in two separate states. But if you do the math, you find that the two different states of this observer can’t interact with each other – electromagnetically, gravitationally, or through any sort of interaction. So there’s no reason they can’t occupy the same physical space.

      • TheAncientGeeksTAG says:

        If the observers are in a coherent superposition of states, then they can interact – such interactions (properly, interference) are our evidence for the existence of coherent states. Trying to get non-interacting worlds out of the math is a major and contentious issue, and it’s not clear that you can do that with SWE evolution alone – and the simplicity argument insists that you are doing everything with SWE evolution.

  44. Sok Puppette says:

    It’s not that not believing in the supernova is simpler. It’s that the supernova question is literally meaningless.

    The supernova can never have any effect on your observations, even in principle. It’s outside of your light cone. If you can’t ever observe any consequence of something, you can’t possibly know about it. Which means there’s no point in even having an opinion about whether it’s “true”, and in fact the most practical approach is simply to say that it doesn’t even have a truth value. There’s no important sense in which the supernova can exist for you AT ALL. It’s nonsensical to even talk about whether it does or doesn’t happen. So tacking it onto your explanations is just a silly waste of time.

    The devil thing might be testable. But if it’s not testable, that means that the devil is perfect in creating the same consequence as the dinosaur theory (at least up to the limits of your ability to notice any errors). In order to be perfect, the devil has to avoid creating any unrelated observable consequences for you. So the devil again has no observable consequences and no truth value.

    On the practical side, if you want to predict what you will observe, you will also have to perfectly apply the dinosaur theory. So your task is reduced to the task of fully understanding the dinosaur theory (to the best of your ability).

    Those are just the limits you have to live with. You can’t know that the Universe didn’t pop into existence last Thursday with all the “history” baked in… but, precisely because you can’t know, it’s pointless to act as if it even means anything to say that.

    You can always choose the simplest explanation; that’s convenient, it’s aesthetically satisfying, and, when there is a testable difference between two explanations, the simpler one often seems to end up making better predictions in practice. But if there isn’t a difference in prediction, then the reason for not tacking untestable and undecidable irrelevancies onto your explanations of the world is that they don’t actually explain anything. They’re just ugly, impractical, time-wasting irrelevancies.

    The same applies to MWI versus Copenhagen, at least until somebody identifies some potentially observable consequence of the difference… which as I understand it can’t happen. The universe you observe obeys certain regularities. Both views explain those regularities. One of them may be more intellectually satisfying, or simpler to calculate with, but you have no empirical basis for preferring one over the other, and no epistemological basis either, unless you literally think beauty is truth.

    Nothing can ever be true or false until you can test it… so let’s just stop even talking about the absolute “truth” or “falsity” of things that aren’t testable.

    You can however, treat any part of a theory that you can’t test as false for practical purposes, since that enables you to discount it in your future thinking, thus saving a lot of time and effort. Maybe the problem with putting that stuff into a “scientific” explanation is that it’s rude because it wastes other people’s attention.

    You can’t say “the wavefunction collapses”, or “all branches continue to evolve”. You can say “you will observe what looks like wavefunction collapse, but the description of the system is simpler if you assume that all branches continue to evolve and you only observe one of them”. Or something similar to that.

    • arbitraryvalue says:

      If you can’t ever observe any consequence of something, you can’t possibly know about it. Which means there’s no point in even having an opinion about whether it’s “true”, and in fact the most practical approach is simply to say that it doesn’t even have a truth value. There’s no important sense in which the supernova can exist for you AT ALL. It’s nonsensical to even talk about whether it does or doesn’t happen. So tacking it onto your explanations is just a silly waste of time.

      By this logic, as far as I am concerned the universe will no longer exist in any important sense after I die. On the other hand, if we reject solipsism (at least for the sake of argument) then why divide entities that are outside my light cone into those that would be in my light cone if I were immortal and those that still wouldn’t?

  45. Johnny4 says:

    I agree that we’re allowed to prefer the MWI interpretation without having to worry about it being untestable. But the fact that we’re allowed to prefer a theory, given one constraint on theory choice, really isn’t saying much. Like some of the others above/below I’d be interested in hearing about *why* we should prefer this theory: specifically I’d like to know more about “splitting” and the identities of things, most importantly persons, between “worlds”. I mean, maybe there’s some good explainer that I don’t know about, but the MWI seems to have lots of problems, although maybe they’re problems physicists don’t care about. But, e.g., it looks like the population (of persons) will be infinite on MWI, and if that’s true standard forms of consequentialism break down (since no choice changes the amount of net good).

  46. theifin says:

    The theory that “when you mix an acid and a base, you get salt and water, and a star beyond the cosmological event horizon goes supernova” does make testable predictions: it predicts that the supernova rate within our light cone is a function of the rate at which acids and bases have been mixed beyond the cosmological event horizon. Further, it predicts that supernova rates within our light cone should have increased sharply at around the time in the universe’s history when acid and base molecules first started condensing out of interstellar dust.

    In the second story, it’s not the case that the two theories “paleontology” and “the Devil mimicking paleontology” make “identical predictions”. Only paleontology is actually making predictions here: for the Devil to trick us s/he must look at the predictions of paleontology, and use those predictions to plant fake fossils that are consistent with those predictions. The two theories being compared are thus “predictive mechanisms of paleontology” versus “predictive mechanisms of paleontology + The Devil”. The more complex theory is the second, because it includes the first and adds an extra entity.
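
    To make “includes the first and adds an extra entity” maximally blunt, here is a toy description-length sketch in Python (mine; the strings are obviously stand-ins for full theory specifications):

    # Toy illustration: the Devil theory must contain the whole dinosaur
    # theory as a subroutine, plus a Devil, so it can never be the shorter one.
    paleontology = "ancient animals lived, died, and were fossilized in strata"
    devil_theory = ("a Devil consulted the following theory and planted bones to match: "
                    + paleontology)

    assert len(devil_theory) > len(paleontology)
    print(len(paleontology), len(devil_theory))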

    • HeelBearCub says:

      Come to think of it, what is the rate of acids and bases mixing in the universe? Shouldn’t the entire universe consist of supernovae?

      • theifin says:

        except that: each supernova reduces molecules in its neighbourhood to their constituent parts (with some probability) and so each supernova removes a large number of acids and bases from the universe – there is a regulatory effect leading to a steady state.

        Under this theory, if there are a lot of supernovae near you, you should start mixing acids and bases on an industrial scale: that will reduce the number of acids and bases outside your cosmological event horizon (blowing them up via supernova) and so reduce the number of supernovas that you see in your neighbourhood.

  47. HeelBearCub says:

    Point 2 is not good and doesn’t advance the argument you are trying to make. If we substitute “simulation” for “Devil” you won’t blink in accepting it as a possibility.

    What distinguishes the Devil is exactly the thing you are trying to avoid, the testable predictions made by proponents of the Devil theory.

    • Taleuntum says:

      I will also accept the Devil theory if you can convince me that there is a high probability that the civilizations in the universe make a very high number of civilizations-with-fossil-planting-Devils, i.e., that there are arguments for simulation other than that it explains fossils.

      Furthermore, I hope I can decouple the Devil-planting-fossils theory from other testable predictions concerning the Devil. If choosing the Devil annoys you, you can replace it with “George” and Scott’s point still stands.

    • Robert Jones says:

      Occam’s Razor just isn’t the slam-dunk to the satanic fossils theory that Scott would like it to be. As somebody pointed out on the other thread, the correct objection is that it’s a fully general explanation, which not only explains the actual observations and any conceivable future observations, but which would also explain any counterfactual observations. It’s like “maybe I’m a brain in a vat” or “maybe God created the world 5 minutes ago (with the full set of memories and historical records we observe)”. Such suggestions can’t be refuted and close down the possibility of any further discussion. I don’t think we can do anything other than shrug, say “maybe” and carry on.

      In fact there is also a theological objection to Satan planting fossils, which is that it ascribes creative power to Satan, which is heretical, but that of course is outside the present domain of argument.

  48. Dan says:

    “The universe was created in seven days by an all-powerful God who exists outside of time and who created Man in His image and then Satan created dinosaur bones” is a simpler explanation, Occam’s-Razor-wise, than “There are three generations of quarks and three generations of leptons that interact via four fundamental forces mediated by gauge bosons, and… and then humans evolved through a million-year series of coincidences that made sense at the time (which also involved dinosaurs) and…”

    The problem with the God/Satan theory is that when you try to fit in all of the available evidence, it’s clear that God/Satan is the unnecessary entity: “God created three generations of quarks and three generations of leptons and created Man in His own image but made it look like there had been a series of millions of years of coincidences” is a terrible theory.

    And likewise, “there are infinite parallel universes but we only ever see one that is consistent with Born’s rule” seems to be adding infinite universes unnecessarily, if you can’t explain where Born’s rule is coming from.

    • ec429 says:

      if you can’t explain where Born’s rule is coming from.

      Note that collapse interpretations don’t explain where it comes from either, they just hard-code it into their collapse postulates.

      (And as other commenters have stated in other subthreads, “there are infinite parallel universes” is a really misleading characterisation of MWI. There is one universe, the wavefunction; the world looks classical to us because we are ourselves quantum objects. The problems only come if you think a “universe” is something made of billiard balls bopping around, which Bell already tells us can’t be the right picture.)

      • The Nybbler says:

        And as other commenters have stated in other subthreads, “there are infinite parallel universes” is a really misleading characterisation of MWI

        It’s right there in the name. Many Worlds Interpretation.

        which Bell already tells us can’t be the right picture.

        Eh, under MWI there are versions of me who live in a world where just by chance, Bell’s inequality is never violated.

        • ec429 says:

          And as other commenters have stated in other subthreads, Many Worlds is a really misleading name, which unfortunately we seem to be stuck with for stupid path-dependent reasons. (If only there were some histories that produced the same name but were half a wavelength longer!)

          under MWI there are versions of me who live in a world where just by chance, Bell’s inequality is never violated.

          I fail to see the significance of this.

  49. Thegnskald says:

    MWI, or something like it, falls out of the Copernican Principle; it is just another way we don’t occupy a special place.

    And the Born Rule, or something like it, will fall out naturally sometime later, when the idea of quantization is supplanted with actual understanding. Also Copernican Principle there. Really it is staggering how special we still think our place in the universe is.

    But right now these sorts of arguments are… unproductive.

    • rahien.din says:

      “Our every choice spawns an entire universe” seems far more vulnerable to the Copernican Principle than “We don’t understand why particles act so strangely.”

      • Thegnskald says:

        No universes are spawned at all, and certainly not by “choices”.

        Matrix mechanics may be more to your liking, though. It is undergoing a minor revival lately as a weird sort of Bell-compatible hidden variables approach.

        • rahien.din says:

          What an unusual response. “No universes are spawned at all” is either incorrect or defensively pedantic. “certainly not by ‘choices'” seems to imply that choices are not themselves events.

          Or maybe there is more you can offer to the layperson?

          • Thegnskald says:

            The idea of universes spawning is, as described elsewhere here, a metaphor, and a misleading one. It is more like… a wave splitting on a rock. You can find yourself on the left side or the right side. A whole new wave isn’t created.

            Even that metaphor is misleading, however, because the universe isn’t one wave, and it doesn’t split like that. There aren’t two new universes after a 50/50 event, there is a complex superposition.

            Shifting to Schrodinger’s cat, imagine a physicist in a box containing the cat in a box. The physicist opens the box containing the cat. You wait ten minutes to open the box containing the physicist. During that ten minutes, the physicist is in a superposition, just as the cat was; the whole universe doesn’t split as soon as the box containing the cat is opened. It likewise doesn’t split when the box containing the physicist is opened. Rather, there is a superposition of possibilities that only resolves locally.

          • rahien.din says:

            That description is so much more elegant and so much more believable.

            Thank you for your patience with me.

            Simultaneously, it seems disingenuous that someone ever decided to name this idea “many worlds.” Sucks to be saddled with that moniker…

          • Thegnskald says:

            I think it was less misleading in the original societal context, but I don’t really know.

            It took trying to understand pilot waves before I really “got” what MWI was trying to describe; in particular, it helped to have someone describe the difference between the two ideas.

            The biggest part of it I struggle with now is non-interference; in particular, what exactly separates the superpositions. I can conceptualize a dimension on which they are separated, but I don’t quite get why they are separated, which I think may be important.

            ETA: Unless they aren’t separated, and are instead just canceling out with respect to the “local” superposition. I think that works. It has been a while since I’ve refreshed my brain on that point though.

  50. meh says:

    digging into the sequences again?

    for reference for anyone who missed it
    https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor

  51. benf says:

    The problem with the many-worlds interpretation is not that it doesn’t make any testable predictions. It’s that the testable prediction it makes is so utterly nonsensical that it has to be redefined into a safely untestable definition.

    “All other branches of the wavefunction are just as real as the one we happen to find ourselves in! And by ‘just as real’ I mean ‘not real in any common sense of the word, but in another sense that is vague and undefinable'”.

    • JPNunez says:

      Yeah, it seems to be indistinguishable from theories that prefer non-localism, so I think we are at a dead end.

      It’s useless to prefer one or the other, and it’s only done for aesthetic reasons, so we gotta wait until a better theory comes along.

  52. rahien.din says:

    Let’s play some games.

    The first game is called “Salt Explosion.” Two teams compete to predict whether adding an acid to a base will produce a salt. Team A predicts that it will, every time. Team B also predicts this, but also that the reaction is accompanied by an unobservable supernova. Every round of the game is a tie.

    The second game is called “Dinosaur Family.” Two teams compete to assemble a paleobiological framework from a complex set of fossils and geologic data. Team A believes that these fossils and data are real parts of the world. Team B believes instead that it’s all simulated, though with perfect fidelity. Every round of this game is a tie, too.

    The third game is called “Flame Chaser.” Two teams compete to predict whether a candle flame will vanish when blown on. Team A believes that the candle flame stops existing once it is blown out. Team B believes that the candle flame continues to exist, but just goes somewhere very distant. Every round of this game is a tie, too.

    After yet another rousing round of Flame Chaser, a B-team player says to their opponent, “You realize that candle flames never stop existing, they just leap to distant candles.” The A-team player replies, “Isn’t that a property of the candle?”

  53. niohiki says:

    But being indifferent between ‘wavefunction branches’ and ‘wavefunction branches, and then somewhere we can’t see it one branch mysteriously collapses’

    This is not the case. @smack quite appropriately mentioned the Born rule in the previous post, and it is mentioned here again by @Harry Maurice Johnston and implied by @knzhou (plus I add my support). It really ought to be a central part of the discussion. Either way, there’s something happening somewhere, mysteriously, making us experience only one “branch” (I don’t really like this name, because it makes people think, like I have read in some other answers, that somehow the universe duplicates or something), by a mechanism we don’t understand. Whether it makes the rest of the “branches” disappear or stay…

    Personally, I have the feeling that sweeping under the rug the emergence of Born’s rule in a MW context from some yet-undiscovered physics is rather more reasonable (well, as reasonable as it can be) than “observers are inherently different from the rest of matter and they cause wavefunction collapse because… souls”. For instance, for a kind of interesting take on what could happen if one takes the Copenhagen position truly seriously, read Quarantine, by Greg Egan. You will see how weird it is. (Although it can also serve to highlight how weird MW is.)

    But that is my feeling of what physics will eventually find out if the “unreasonable effectiveness” of mathematics stays as it has up to today. That is all we can have for now.

    On the other hand, I do not expect the question will remain out of the realm of testing forever. If – for instance – we find out a more fundamental physical theory which has among its consequences a derivation of Born’s rule in its effective QM limit, then MW would indeed become the Occam’s-razor-sense simplest theory.

    (@Harry Maurice Johnston hopes such fundamental discoveries will come from quantum computing; while I may not be so optimistic, I agree it would indeed be maximally awesome.)

    • smack says:

      Thanks for the post and the reference, which (along with Harry’s and knzhou’s) kept me from having to argue that again. : )

      I think I have the opposite intuition between the two, particularly since most Copenhagenists I know will insist that they do *not* believe in any special properties of observers. But most of all, I think that neither one seems reasonable at all, and that while it’s fun and perhaps fruitful to discuss these possibilities in the meantime, we should shut up with the dogmatism until we actually understand better what’s going on, which will take time and theorizing and, yes, certainly more experiments. (There’s no accusation intended with the “we,” this time. ; ) I mean the whole community.)

      Both sides are desperate to “prove” using some argument that theirs is the One True Interpretation and the other could only be held by fools. I have my own opinions, but a reasonable view seems to me to be that both interpretations (and all the others) have flaws that a reasonable and philosophically sophisticated appraiser could well conclude are fatal, so a little humility would go a long way. In the meantime, unitary evolution + Born’s rule keeps giving good predictions….

  54. Tuesday says:

    Personally, I prefer the Many-Worlds Interpretation because it makes “quantum immortality” a real possibility, which is very interesting to think about (as long as you don’t think about it too much, then it becomes somewhat nightmarish).

    • Murphy says:

      my issues with this are twofold.

      First, infinity doesn’t imply all possible patterns.

      Pi appears to be somewhat random, but pick a specific 10000-digit sequence and there’s no guarantee that it will definitely turn up somewhere if you go deep enough into Pi.

      Pick a really long sequence of digits to represent the state of your brain and there’s no law of the universe that says it will turn up in Pi.
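
      You can poke at this empirically, though it settles nothing either way (a sketch using the mpmath library; whether pi is “normal” is an open problem, so a hit or a miss in the first chunk of digits proves nothing):

      from mpmath import mp

      mp.dps = 100000                        # compute pi to 100,000 decimal places
      digits = mp.nstr(mp.pi, mp.dps).replace(".", "")

      target = "99999999"                    # any pattern you care about
      print(digits.find(target))             # -1 means: not in this many digits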

      Plus, many-worlds is universe-heavy… but there’s a difference between infinity and merely very very very large.

      Even if every cubic cm of space produced an xkcd number of additional universes every picosecond and had done so for the last 14 billion years, and each one of them did the same… you could still assign every universe in the many-worlds to a room in the infinity hotel with a finite number on the door. (Very big doors, though, to fit the numbers.)

      • Thegnskald says:

        There is no requirement that a given pattern be contiguous, in either space or time. Nor does any part of the pattern need to be uniquely represented. For a given 100,000-digit sequence, all that is necessary is that each digit appear in pi somewhere.

        Additionally, it doesn’t need to “store” your entire brain, just the portion necessary for continuous experience/awareness.

        ETA: Most of your futures are “minds” observing a loop of infinite static. Nightmarish, yes. Hopefully we will eventually be able to devise a mathematical proof of all possible end states of conscious minds and provide an exit.

        • Thegnskald says:

          Speaking of nightmarish thoughts, if our brains DO depend on quantum processes, too many quantum observations might result in a form of dementia.

          ETA: Contagious dementia, since if you observe it, you are in a superposition without enough amplitude.

      • zzzzort says:

        Pi appears to be somewhat random, but pick a specific 10000-digit sequence and there’s no guarantee that it will definitely turn up somewhere if you go deep enough into Pi.

        I somewhat agree with your overall point, but this example is very much an open problem. Furthermore, you would be able to find a specific 10000-digit sequence in the vast majority of numbers (that is, most reals are normal).

  55. Robert Jones says:

    But being indifferent between ‘wavefunction branches’ and ‘wavefunction branches, and then somewhere we can’t see it one branch mysteriously collapses’ is the same kind of error as being indifferent between ‘acid and base make salt’ and ‘acid and base make salt and water, and then somewhere we can’t see it a star mysteriously goes supernova’.

    The wavefunction doesn’t ‘branch’. The wavefunction is, well, a wave function, returning an amplitude for any co-ordinates in space and time. This is a key problem for MWI: saying that measurements cause a branching event seems no better than saying that measurements cause wavefunction collapse.

    • Harry Maurice Johnston says:

      You may be missing an important point there – when we talk about many-worlds we are of necessity talking about a quantum field theory or worse, so the wavefunction is a much much more complex object than the single-particle version you’re talking about. And not to harp on about this, but the math really does predict that a measurement will cause a branching event – it’s a consequence of quantum theory, not an additional postulate.

      • Robert Jones says:

        I agree that the wavefunction is more complex, but I don’t see that this makes any fundamental difference. It’s been a long time since I studied QFT, but I just don’t think you’re right. I’ve checked the index of a QFT textbook I have here, and there’s no entry for branching (and the only entry for measurement refers to an analysis showing that no measurement in Klein-Gordon theory can affect another measurement outside the light-cone, which I don’t think is relevant). You’re going to have to show me the maths, I’m afraid.

        • Harry Maurice Johnston says:

          OK, I have to admit that I haven’t tried to personally verify the validity of Everett’s work, but I also have to assume that if the maths were flawed that fact would by now be well-known.

          Check me on this, though: single-particle QM can’t replicate even simple decoherence, right? I mean like the difference between a laser and an ordinary light bulb. Since many-worlds requires decoherence, you can’t expect to see it in single-particle QM.

          It is of course perfectly reasonable for a QFT textbook to use the Copenhagen interpretation, which is after all both the conventional position and far more suitable from a pedagogical standpoint.

          • Robert Jones says:

            You’re right that single-particle QM can’t include decoherence. However, Everett didn’t draw on the concept of decoherence (and couldn’t have done, since he was writing 13 years before the concept originated), so I don’t think you’re right that many-worlds requires decoherence. Wikipedia saith, “Provided the theory is linear with respect to the wavefunction, the exact form of the quantum dynamics modelled, be it the non-relativistic Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, does not alter the validity of MWI since MWI is a metatheory applicable to all linear quantum theories.”

            I refer to Everett’s thesis, which I downloaded here (although I believe that is not the original front page). I think the relevant section is “IV. Observation”, which begins on page 63 (as numbered).

            The first point is that Everett introduces observers. The observers themselves are not predicted by the theory. Secondly, Everett requires that observations have certain properties, namely that an observation preserves the eigenstates of the observed system and changes the state of the observer in accordance with the eigenvalues.

            By page 78, he gets to:

            In conclusion, we have described in this section processes involving an idealized observer, processes which are entirely deterministic and continuous from the over-all viewpoint (the total state function is presumed to satisfy a wave equation at all times) but whose result is a superposition, each element of which describes the observer with a different memory state. We have seen that in almost all of these observer states it appears to the observer that the probabilistic aspects of the usual form of quantum theory are valid. We have thus seen how pure wave mechanics, without any initial probability assertions, can lead to these notions on a subjective level, as appearances to observers.

            From a quick look, the maths seems ok, but aren’t the requirements question begging? Why should observations have those properties? It seems to me that this is “solving” the measurement problem by postulating that measurements happen as required.

          • Thegnskald says:

            Everett might make more sense if you think of measurements as determining the position of the observer, rather than the observed.

            And not through any special property of either observer or observed, although this is hard to communicate meaningfully.

          • Harry Maurice Johnston says:

            @Robert,

            I stand corrected; thank you.

            I can’t even argue that the physics community would have noticed by now if nobody had actually done the math necessary to show that MWI is plausible. Not with a straight face, at any rate; the physics community apparently firmly believes that classical particles have different statistics depending on whether they are indistinguishable or merely identical, and I’m almost certain there’s no mathematical grounding for that. (I hadn’t realized how many people with a physics background are here; I must ask about this one next open thread.)

            I might fall back on the assumption that Eliezer, at least, would have noticed, but that’s sounding pretty weak even to me. 🙂

            So I dunno. I’ll try to post something later today about the decoherence math that I think I do personally understand – with the proviso that I suppose I’m only guessing that this is at all relevant to the problem of measurement.

          • nadbor says:

            the physics community apparently firmly believes that classical particles have different statistics depending on whether they are indistinguishable or merely identical, and I’m almost certain there’s no mathematical grounding for that

            Did you deliberately write ‘classical’ – as in non-QM? If so, then why would anyone believe such a thing? And what does it matter anyway, since there aren’t any non-QM particles around?

            Or are you saying that this statistics thing is not even true in QM? In that case, could you please explain? Honest question. I thought that was a pretty basic fact from statistical mechanics, but it’s been a decade since I studied it and I could easily be mistaken.

          • Harry Maurice Johnston says:

            @nadbor, yes, classical. Newton’s equations. The context in which this came up was an answer I had posted which contrasted QM with the classical picture. I had said something like, “you can do the same thing [indistinguishable particles] in classical mechanics but in that case it doesn’t affect the mechanics” and got shouted at.

          • Harry Maurice Johnston says:

            @Robert, sorry for the delay. I was hoping to review my thesis to refresh my memory, but that’s clearly not going to happen anytime soon, so I’ll present what I can remember but this should be taken sceptically.

            Anyway, there’s a standard technique in quantum optics. Quantum optics is an experimental field as well as a theoretical one, so my belief is that this technique has been tested against experiment, but I should emphasize that I have no personal knowledge on that question.

            You’ve got a quantum system of interest, say light travelling through a crystal, and you want to model decoherence. You can do this by allowing the photons to interact with the microscopic vibrations in the crystal (i.e., heat?) which we model as phonons. The vibrations are random, and it is a good approximation to assume that the light doesn’t affect them much, so we consider them to be a thermal bath.

            The idea is then to integrate over the states of the thermal bath, which converts the wavefunction into a density matrix. If you start with a density matrix representing a pure state, it will evolve into a mixed state in a smooth way as the light interacts with the thermal bath.
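
            A minimal numerical sketch of that procedure (a single qubit, with a phenomenological dephasing rate standing in for the integration over the phonon bath – the rate and step count are made up purely for illustration):

            ```python
            import numpy as np

            # Pure superposition (|0> + |1>)/sqrt(2), written as a density matrix.
            psi = np.array([1.0, 1.0]) / np.sqrt(2)
            rho = np.outer(psi, psi).astype(complex)

            gamma, dt = 0.5, 0.1  # hypothetical dephasing rate and time step

            for _ in range(20):
                # Integrating out a thermal bath typically leaves the populations
                # (diagonal) fixed while the coherences (off-diagonal) decay smoothly.
                rho[0, 1] *= np.exp(-gamma * dt)
                rho[1, 0] *= np.exp(-gamma * dt)

            # Purity Tr(rho^2) is 1 for a pure state and 0.5 for a maximally mixed
            # qubit; here it ends up in between: the state has smoothly become mixed.
            print(rho.round(3))
            print(np.trace(rho @ rho).real)
            ```

            (As discussed just below, reading the surviving diagonal entries as outcome probabilities is itself already an application of the Born rule.)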

            If you don’t look too closely, this sounds a lot like a measurement: you’ve started with a pure state and wound up with a mixed one. At first glance it even looks like you might have derived the Born rule, but you’re implicitly using the Born rule to interpret the meaning of the density matrix, so that’s circular reasoning. (Another point I’d overlooked until reading the sequences!)

            It still seems to me like thermodynamics should be playing some sort of role in measurement. In MWI, at least, there’s an assumption that after a measurement you can’t wind back the two branches to interfere with one another again, and the only sensible way I can see to justify that would be that the process of measurement is expected to be thermodynamically irreversible.

            That’s also my objection to this paper, but on the other hand the fact that nobody else seems to have raised this objection indicates that my ideas shouldn’t be taken seriously.

  56. jmcb says:

    Never has the phrase “shut up and calculate” rung so true.

    • benf says:

      “Shut up and calculate” is great if you’re trying to become an electrical engineer. If you’re trying to understand reality, the calculations are a little bit beside the point. Einstein didn’t calculate his way to relativity theory. If you want to build a functioning satellite, you’ll need to do some math, but if you want to understand the nature of time and space, learning the equations can just as easily be a dead end as an enlightening exercise.

      • zzzzort says:

        Hard disagree (for most values of ‘understand reality’). Most practicing particle physicists I know are firmly in the shut-up-and-calculate camp, and address questions such as what if there were a two-Higgs doublet, or how did baryogenesis happen, or what does dark matter consist of just fine. The fact that many-worlds and Copenhagen make the same predictions means you really, truly don’t need to choose.

        • chrisminor0008 says:

          But they don’t make the same predictions. Copenhagen doesn’t make any predictions at all because it doesn’t give you an algorithmic description for when a measurement takes place. If you try to actually pin this down, you will get a recipe for an experiment that will distinguish Copenhagen from Many Worlds.

          • zzzzort says:

            Copenhagen doesn’t make any predictions at all

            Ok, if we’re being pedantic about predictions following from a specified model, you seem to be working in a model where QM made no predictions at all before 1957. This conflicts with available data, so I would suggest you revise your model.

  57. HaraldN says:

    I have a question regarding many worlds, namely how it handles things we know from quantum field theory.

    To wit:
    In quantum field theory, when we calculate something – say the mass of a particle, or its chance to interact with another particle – we consider an infinite series of terms, each representable by a Feynman diagram. So an electron absorbing and re-emitting a photon would have a dominant term of ‘electron absorbs photon, then electron emits photon’, but would also have a term of ‘photon splits into electron–positron pair, then positron annihilates with incoming electron’. Your answer becomes more accurate the more preposterous chains of events you consider.

    But this is some kind of spooky ‘everything happens, then reality somehow sums over the events to produce the answer’. As I understand multi-world, the idea is that if you measure the spin of an electron, you will get both answers, for yous in different reality branches. But we get very consistent answers when measuring things like electrical charge, mass, etc., even though a consistently applied multi-world view seems to say you should get one of the possible event chains, not a weighted sum?

    I am probably confused about something, but I don’t know what.

    • Robert Jones says:

      I think you’re right that Feynman diagrams are suggestive of MWI, but I think that there’s a real danger that we’re being misled by a particular piece of analytical apparatus.

      The question for me is, “Why do we not observe the superposition?” That may sound naive, but if the wavefunction is the real thing, in the Feynman-esque everything happens sense, then the other universes are not really other universes, as they’re all part of the same wavefunction. So why are we isolated from them? The whole point is that we resolve the diagram by summing over all possible ‘universes’. The answer must be ‘decoherence’, but I’m just not able to get a handle on what that means. And I don’t really see how it could possibly mean “other possibilities go off to become their own universes”.

      • Harry Maurice Johnston says:

        Well, the different branches are definitely just isolated parts of the same universe, from the viewpoint of a hypothetical objective outside observer. They are sufficiently isolated from one another that they are to all effects and purposes just like separate universes from the point of view of an observer living inside one branch or another – but to say they actually are separate universes isn’t anything more than a metaphor IMO.

        And yeah, I’m not sure I’ve got a handle on how decoherence really works either, at least not enough of one to try to explain it. I used to be able to do the math, but that was a long time ago.

        [ETA: and for that matter, the word “parts” in my first sentence isn’t much more than a metaphor either; as one important example, the total expectation energy of the system is the average, not the sum, of the expectation energies of the various so-called “isolated parts”.]

        • niohiki says:

          Well, honestly, most of the time that people invoke “decoherence” they actually want to say “and somehow Born’s rule happens”.

          For example, we totally get decoherence of a bunch of spins in a heat bath. Now, how does that imply that repeated measurements by a macroscopic observer on the same experimental state place said observer in either result with precisely the frequencies given by the rule…

          • Harry Maurice Johnston says:

            Well, you can have a practical need to model decoherence without necessarily expecting it to give you Born’s rule out of nowhere – the latter can just be a postulate, same as it usually is. That’s pretty much the situation I was in back in the day, modelling electrons in a regime where both quantum effects and decoherence were relevant.

        • Robert Jones says:

          to say they actually are separate universes isn’t anything more than a metaphor IMO

          I think this is probably right in Everett’s original conception, not least since he didn’t refer to multiple worlds or universes. I’m not at all sure that this is right in discussions of MWI generally. The point being made in section I of the top post seems to be that there is a fact of the matter about whether supernovae are occurring beyond the cosmological event horizon (of which I am less sure than Scott, but never mind). I’m reading the analogy as saying that there is equally a fact of the matter about the other universes, which doesn’t sound like they’re viewed as metaphorical.

          If MWI is just saying that the wavefunction is the real thing, I’m much more sympathetic to it, although I still think it has difficulty explaining why we observe the universe as we do. Or to be more precise, how did it happen that the universe contains creatures which observe the universe as we do? This is probably closely linked to your question as to how we derive Born’s rule.

          • Harry Maurice Johnston says:

            Darn, another ambiguity. I wasn’t trying to say that the continued existence of the other branches is considered to be metaphorical; I was trying to say that calling a different part of the objective wavefunction a “separate universe” is metaphorical. From the viewpoint of the hypothetical objective outside observer they’re just superpositions, after all.

          • Robert Jones says:

            I understood that.

      • benf says:

        Why don’t we observe the superposition? Because the superposition describes the probability of getting any particular result, but when you observe, you can only get ONE result.

        We understand probability perfectly well when we spin a roulette wheel; why is it so hard to accept that the same principle applies when measuring the spin of an electron? And, in a slightly more nutball take on the issue, why do we imagine that the probability function of an unspun roulette wheel and the probability function of the spin of an unmeasured electron are fundamentally different sorts of things? They’re both representations of the possible outcomes of future events, and they both “collapse” to one and only one outcome when those events happen. One requires more fancy math and one can afford to ignore the fancy math to simplify the calculations, but both have the same fundamental uncertainties.

        My conjecture is that the uncertainty about future macro events and the uncertainty about quantum measurements are not different things. They are the same thing. The quantum uncertainty is the wellspring of the macro uncertainty.

        • arbitraryvalue says:

          Physics started out with the assumption that there was only one kind of “uncertainty”, the kind you would have about the outcome of a roulette-wheel spin. Einstein, among others, tried very hard to prove quantum mechanics was compatible with this assumption. But as it turns out, quantum mechanics (regardless of which interpretation you choose) demands that this assumption must be rejected.

          • benf says:

            I don’t agree with this framing. Physics started out with the assumption that the roulette-wheel uncertainty was only a reflection of our lack of perfect knowledge of the system. Quantum mechanics illustrates that there’s no such thing as perfect knowledge of ANY system, even a roulette wheel. Our uncertainties about macro systems can be papered over and still be good enough to get on with our macro lives, but that’s not a discovery of a new, more fundamental kind of uncertainty; it’s actually a tautology involving the very definition of what a macro system IS. The macro world is, by definition, the domain of the extremely probable.

          • Viliam says:

            @benf
            Lack of knowledge does not destructively interfere with itself. Therefore “we can never get perfect knowledge” is not a sufficient explanation of quantum mechanics.

            For example, we can never know whether a photon would or wouldn’t arrive at a particular spot through slit A, and we can never know whether it would or wouldn’t arrive at the same spot through slit B, but despite that, we can know that if both slits are open, it cannot arrive at the spot.

            To predict the roulette wheel, you need real numbers. To predict quantum events, you need complex numbers. Again, this shows it is not the same thing.

          • benf says:

            @Viliam “To predict the roulette wheel, you need real numbers. To predict quantum events, you need complex numbers.”

            This is only approximately true. To predict the entire quantum system of the roulette wheel, you need to do the same quantum calculations.

        • chrisminor0008 says:

          @benf, You should probably read about Bell’s Inequality/Theorem. I think you’re advocating some variation of a hidden variables theory, and all ideas in this class have been thoroughly disproven.

          • benf says:

            Bell’s inequality has nothing to do with this point, and my point is perfectly compatible with Bell’s theorem. There is no underlying hidden information about quantum states. The measurement you get really IS random.

    • Anatid says:

      In the double slit experiment, say you want to calculate the probability that the particle will hit point P on the screen. You could basically have two Feynman diagrams, one where the particle goes through slit 1 and then hits P, and one where the particle goes through slit 2 and then hits P. You would calculate a number for each diagram, add the numbers together, square the sum, and that would be the probability of the particle hitting P.
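
      The arithmetic in that recipe is short enough to type out – a sketch with invented amplitudes (real ones would follow from the slit geometry and path lengths):

      ```python
      import numpy as np

      # One complex number per path/diagram: slit 1 -> P and slit 2 -> P.
      # Magnitudes and phases are made up; really they'd follow from path lengths.
      amp_1 = 0.5 * np.exp(1j * 0.0)
      amp_2 = 0.5 * np.exp(1j * 2.1)

      # Add the numbers together, then square the magnitude of the sum:
      p_at_P = abs(amp_1 + amp_2) ** 2
      print(p_at_P)  # includes the cross term, i.e. interference

      # Squaring first and then adding would be the classical, no-interference rule:
      print(abs(amp_1) ** 2 + abs(amp_2) ** 2)
      ```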

      In MWI, after the particle hits the screen there’s not “a world where the particle went through slit 1 and hit P”, and “a world where the particle went through 2 and hit P”, there’s just “a world where the particle hit P”, and to calculate its probability you sum over these two paths that lead up to it. Both Feynman diagrams contribute to the same world. So, Feynman diagrams don’t each describe a world. Rather, a whole set of Feynman diagrams all contribute to the amplitude of a single world.

      Anyway, I want to emphasize that really Feynman diagrams are computational tools in perturbation theory, which is just a way of *approximately* solving the Schrödinger equation and determining the amplitudes for things to happen. Feynman diagrams aren’t themselves real, and there are processes (for example, the binding of quarks together into protons and neutrons) for which the approximation given by perturbation theory is quite bad, so Feynman diagrams are useless and totally different schemes are needed to calculate accurate numerical results. For example, with a good enough computer you can just directly solve the Schrödinger equation numerically to high accuracy to calculate your amplitudes. In that way of doing the calculation, nothing resembling a Feynman diagram appears at any step.

  58. Harry Maurice Johnston says:

    I don’t think there’s really an extra assumption as such involved in the Copenhagen interpretation.

    In many worlds, you have wavefunction evolution plus Born’s rule.

    In Copenhagen, you have wavefunction evolution plus collapse, which incorporates Born’s rule.

    The problem with many worlds is that it makes Born’s rule hard to describe in objective terms, despite its critical importance to quantum mechanics as a useful theory; the problem with Copenhagen is that the process it describes seems physically implausible. It’s not a clear contest either way.

    • Harry Maurice Johnston says:

      … to my mind, the problem with the classic many-worlds theory is that it is perfectly linear, which seems both unrealistic and unlikely to allow for either a derivation of or even an objective description of Born’s rule. The problem with Copenhagen is that the necessary non-linearity is introduced in a ham-handed way. A better theory than either would introduce some explicit small source of nonlinearity and allow you to deduce Born’s rule as a consequence. (Sadly I have no pragmatic ideas of how one might go about building such a theory. I’m vaguely hopeful that quantum computing will eventually give us some experimental evidence to guide us in the right direction, though this is probably just wishful thinking.)

      Ironically enough, I had casually assumed that many-worlds was obviously true for years and years, until one day I read the sequences. Eliezer, in the process of arguing for many-worlds, made clear to me the fatal flaw in the idea that I’d somehow managed to previously overlook. I disagree with his conclusions, obviously, but there’s still something amazing in that.

      • eigenmoon says:

        What’s the problem with perfect linearity? Why is it unrealistic, and why does it conflict with the Born rule?

        • Harry Maurice Johnston says:

          To spell this out explicitly, when I say things like “seems unrealistic” and “unlikely” I’m talking about intuition, and I don’t expect anyone to take it at all seriously.

          But given that caveat, it seems to me that in order to predict the Born rule you would need to be able to count something – the number of different versions of you that see each of the possible results, perhaps, but at any rate something – and I don’t see how you could conceivably do that without some principled way of saying how you should divide up the objective global wavefunction into parts.

          … and in a perfectly linear system, there can’t be any principled way of separating the wavefunction up into parts; any possible separation is equally valid. Want a flying spaghetti monster world? Write out the wavefunction for it, call it A, divide the objective wavefunction of the universe up into A and everything that’s left over, and your work is done. Want FSM to be a more likely world than any of the other ones? Divide the wavefunction of the universe up into n copies of A/n, plus whatever is left over, and then it is just a matter of making n big enough.

          If we could add even a little bit of nonlinearity then only some ways of dividing up the wavefunction would work properly. In order for a FSM world to evolve in the way the residents would expect, the “whatever is left over” bit of the wavefunction would have to be given an evolution equation with an explicit FSM term in it, and the whole thing falls over. And once you have a natural “best” way of dividing up the wavefunction you might (maybe!) be able to make a counting argument work.

          Of course, there might be some other completely different way of resolving the problem of the Born rule, that doesn’t require any nonlinearity at all. I just can’t, for my part, imagine what that could possibly be.

          [And I’m painfully aware at this point that I’ve started arguing apparently diametrically opposed positions on different threads; I won’t try to justify that in depth right now, partly because I should have been asleep two hours ago, but I will say that I think it important to appreciate the ways in which MWI is mathematically reasonable before quibbling about why perhaps it isn’t entirely reasonable after all.]

          • eigenmoon says:

            Thanks, I think I see what you mean now. I agree with you that the Born rule might not be possible to derive from the wavefunction math. That said, I would not go as far as saying that linearity is unrealistic or disallows an objective description of the Born rule.

            I’m going to defend the somewhat heretical notion that the mathematical model alone is insufficient to lead us to subjective probabilities even if it completely predicts everything that happens. Consider a Turing machine that works on a tape of 0s and 1s. We need it to be reversible, though (like quantum physics), so let’s say that the caret holds a symbol (initially 0) and the only modification that the machine can do is to exchange the symbol on the caret with the symbol in the current cell. This is still Turing-complete, and I’ll save space by not explaining why.

            Now imagine that we make an AI and ask it the following: “You’re run simultaneously on two Turing machines, the first is [as described above], and the second is the same except with 2s instead of 1s. What’s your subjective probability that you’re in the second one?”

            Here – contrary to the usual scientific intuition – we (and the AI) must ask what’s really under the hood. How is the computation actually done and what is this “2”? If this “2” is represented by a $2 bill and “1” was represented by a $1 bill, the computation is pretty much the same and the answer is 1/2. But if “2” is represented by two heavy crates and the poor employee that lives on the caret has to go out twice in order to transfer a “2” symbol, then the computation is essentially repeated twice, so the answer must be 2/3.

            And for an AI that’s pretty sure that its subjective probability is proportional to the number that represents the nonzero bit, it’s quite reasonable to conclude that whatever is evaluating it must spend computational power proportional to that number. This is entirely orthogonal to the linearity of the Turing machine itself.
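
            For concreteness, the swap machine described above can be sketched in a few lines – just the mechanism, with an arbitrary list of head moves standing in for a real rule table – to show that every step is exactly undoable:

            ```python
            def swap_step(tape, pos, reg, move):
                # Exchange the head's register with the current cell, then move.
                tape[pos], reg = reg, tape[pos]
                return pos + move, reg

            def unstep(tape, pos, reg, move):
                # Inverse: move back, then undo the swap.
                pos -= move
                tape[pos], reg = reg, tape[pos]
                return pos, reg

            tape, pos, reg = [0, 1, 1, 0], 0, 0
            moves = [1, 1, 1]                  # arbitrary stand-in for a rule table
            for m in moves:
                pos, reg = swap_step(tape, pos, reg, m)
            print(tape, pos, reg)              # scrambled intermediate state
            for m in reversed(moves):
                pos, reg = unstep(tape, pos, reg, m)
            print(tape, pos, reg)              # back to [0, 1, 1, 0], 0, 0
            ```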

          • pseudonymous says:

            Divide the wavefunction of the universe up into n copies of A/n, plus whatever is left over, and then it is just a matter of making n big enough.

            It doesn’t really work that way, though. The Born rule only applies when the states in question are orthogonal, so if you treat state A as n copies of A/n, you can’t apply the Born rule to it (since A/n cannot be orthogonal to A/n).
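
            A one-line check of that orthogonality point, with an arbitrary normalized A:

            ```python
            import numpy as np

            A = np.array([0.6, 0.8])      # arbitrary normalized state
            piece = A / 4                 # one of the n = 4 "copies"
            print(np.dot(piece, piece))   # 1/16, not 0: the copies aren't orthogonal
            ```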

            And the Born rule is, in fact, the collapse rule most consistent with linearity. Since transforms in QM are unitary, you can’t have a rule that says the probabilities are proportional to, say, the amplitudes themselves rather than their squares – that is exactly the case in which you would be able to do this sort of A/n hacking.

            That leaves the question of why the transforms are unitary in the first place, of course. While the general question of which norm to preserve is, I believe, yet unanswered, Scott Aaronson shows in his Quantum Computing Since Democritus lectures that if you have to preserve some norm of the form p^n, and want any nontrivial interactions at all, n can only be either 1 (giving you normal probability theory) or 2 (giving quantum mechanics).
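
            A toy numerical illustration of the two surviving cases (only the positive half of the story, not Aaronson’s full argument):

            ```python
            import numpy as np

            def norm(v, p):
                return np.sum(np.abs(v) ** p) ** (1 / p)

            # p = 1: probability theory. Stochastic matrices (nonnegative entries,
            # columns summing to 1) preserve the 1-norm of probability vectors.
            stochastic = np.array([[0.9, 0.2],
                                   [0.1, 0.8]])
            prob = np.array([0.3, 0.7])
            print(norm(stochastic @ prob, 1))   # 1.0 -- preserved

            # p = 2: quantum mechanics. Unitaries (here a real rotation) preserve
            # the 2-norm of amplitude vectors.
            theta = 0.7
            rotation = np.array([[np.cos(theta), -np.sin(theta)],
                                 [np.sin(theta),  np.cos(theta)]])
            amp = np.array([0.6, 0.8])          # 2-norm = 1
            print(norm(rotation @ amp, 2))      # 1.0 -- preserved

            # Cross-checks: each map generically breaks the other norm.
            print(norm(rotation @ prob, 1))     # != 1
            print(norm(stochastic @ amp, 2))    # != 1
            ```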

          • Harry Maurice Johnston says:

            @pseudonymous, yes, but you’re assuming the Born rule, whereas I’m (metaphorically) trying to derive it.

            And I’m not sure what you mean by “system of collapse most consistent with linearity” – sounds paradoxical, since collapse is inherently nonlinear. 🙂

      • TheAncientGeeksTAG says:

        (Sadly I have no pragmatic ideas of how one might go about building such a theory. I’m vaguely hopeful that quantum computing will eventually give us some experimental evidence to guide us in the right direction, though this is probably just wishful thinking.)

        Here’s the best known approach.

        https://en.wikipedia.org/wiki/Ghirardi%E2%80%93Rimini%E2%80%93Weber_theory

    • TheAncientGeeksTAG says:

      In many worlds, you have wavefunction evolution plus Born’s rule.

      In particular, you can’t get any practical predictions without using Born’s rule, so the math you need is identical. That’s one sense in which MWI isn’t clearly simpler.

    • sovietKaleEatYou says:

      That is indeed a problem (“what basis do you choose”/”how do you split”). It is solved in scattering theory by noting that the picture in the far past and the far future is classical (in the limit), and this classical theory provides a basis. In our universe one could make the argument that in the limit as the universe expands, particles end up far apart and the result is effectively a scattering experiment (so the “nonlinearity” is provided, in a sense, by expansion of the universe). This is of course very speculative and probably cannot be made complete without a better theory of physics.

  59. blacktrance says:

    The problem with the first two examples is that absence of evidence is evidence of absence. There’s evidence for acids and bases having something to do with salt and water, but there’s no evidence for any effects on supernovas, so there’s no reason to include that in the theory. There’s not much evidence of the Devil having causal effects on the world in general, so that counts against devil-theories – we can reject them without positing an alternative (though it helps to have one).
    But our choice of interpretation of quantum physics is more like the case for many-dinosaurs than the case against devils. Given the apparent uniformity of physical laws and other empirical evidence, we have reasons to think that there were many dinosaurs millions of years ago – it’s an implication of a simple theory that’s compatible with the evidence. The correct quantum interpretation must be similar.

    • thetitaniumdragon says:

      The Copenhagen interpretation is that reality is statistical until observed.

      This is demonstrable via the double slit experiment. Why would we doubt this is the case?

      That’s a much simpler explanation than multi-worlds – “observation results in quantum collapse” is simpler than “there’s an infinite number of parallel universes with all possible outcomes”. The latter requires you to posit a vastly higher level of complexity for no gain, and worse, it requires you to make excuses for things like conservation of matter-energy and why it doesn’t apply to parallel universes. The Copenhagen interpretation doesn’t require that kind of handwaving.

      • Harry Maurice Johnston says:

        Conservation of matter-energy really isn’t a problem; it only sounds like one if you listen to the popular explanations of many-worlds rather than looking at the actual maths. No need for excuses or hand-waving – the maths works just fine.

      • Ketil says:

        This is demonstrable via the double slit experiment. Why would we doubt this is the case?

        That’s a much simpler explanation than multi-worlds

        I have to admit I don’t get this. How does multi-worlds resolve the paradox?

        If the universe is split on a quantum event (one slit or the other), then surely I am split with it, and this-universe me will observe the particle passing through one slit, while that-universe me will observe it passing through the other.

        For the experiment to make sense, the universe that I end up in must have the particle being in both universes simultaneously. In other words, the multiverse can’t merely be forking horizontally (in time), it must also be forking vertically.

        • Thegnskald says:

          Hrm. Stop thinking in terms of the universe forking.

          You exist in all compatible universes simultaneously. It is only when two universes stop being compatible that there is any forking at all, and it isn’t the universe forking, it is your experience of the universe. It is your position that is being constrained.

          More, over time the set of compatible universes increases again; this is entropy. The state of highest entropy is the state of unity; all possible universes coexist in that state.

        • Robert Jones says:

          In MWI, the universe is not “splitting” on quantum events but on measurement events. You definitely need the electron to be passing through both slits in the same universe!

          • viVI_IViv says:

            In MWI, the universe is not “splitting” on quantum events but on measurement events. You definitely need the electron to be passing through both slits in the same universe!

            But then you have to define “measurement events” as something different from business-as-usual Schrödinger evolution, which means that you have the same complexity as collapse interpretations: you’ve just replaced “collapse” with “split” as an ontologically fundamental phenomenon.

            What MWI proponents do in practice is argue that the splits are actually “soft” – they emerge macroscopically from thermodynamic decoherence – but they can’t really explain how this works, or how it gives rise to indexical probabilities consistent with the Born rule. This makes MWI incomplete.

          • Robert Jones says:

            Fundamentally I agree with you and think this is a problem with MWI (which to be clear doesn’t mean that I’m rejecting MWI outright, just that I think it has more problems than its advocates acknowledge).

            That said, Everett definitely does think that the splitting occurs precisely on measurement events, which he defines in a certain way, in effect as a sub-class of business-as-usual interactions having particular characteristics. He then claims to show that this does give rise to probabilities consistent with the Born rule. He isn’t saying anything about decoherence or thermodynamics. I linked to his thesis in another comment.

          • lightvector says:

            In MWI, the universe is not “splitting” on quantum events but on measurement events. You definitely need the electron to be passing through both slits in the same universe!

            No, you don’t. Here’s what you might be misunderstanding:

            The electron CAN be thought of as passing through a different slit in two separate “universes”. It’s just that those two universes differ solely in the state of the electron and NOTHING else. And those universes may recombine – they do so whenever they exactly match again. So when the electron then hits the back panel, if it does so in a way that leaves precisely no trace of which slit it “really” went through, those two universe states end up “leading back” to *exactly* the same universe state. And this is when you get interference. Whenever two different states proceed to contribute amplitude to an identical matching state, their amplitudes combine back into that single state (and they constructively or destructively interfere based on their phase).

            That said, Everett definitely does think that the splitting occurs precisely on measurement events, which he defines in a certain way, in effect as a sub-class of business-as-usual interactions having particular characteristics.

            If you dig into the math, then I think it should be clear that this is sort of a misunderstanding of Everett. When he says “measurement event” or “observation”, as far as I can tell he just means “literally anything that could cause any particle to end up in a different state as a result of a difference in another particle”.

            If an electron is in a superposition of two states, then even before it interacts with anything else, you can think of the universe as *already* split into two universes, each with a copy of you. But the only difference is the state of that electron.

            When you read phrases in the thesis like:
            As soon as the observation is performed, the composite state is split into a superposition for which each element describes a different object-system state …
            the way you should read it is not that the observation “causes” a split to happen. The electron is *already* split in a superposition. Rather, the act of observation by something else (i.e., a different particle bumps into the electron) is what causes the split to now ALSO be different in the state of that something else, whereas formerly it only differed in the state of the electron alone. And if two states differ in *enough* particles that for all practical purposes they will never again recombine, then informally one might say that decoherence has “occurred”, whereas if they differ in only a few particles, and evolve in such a way that those universe states (or at least some of their further branches) do recombine into the same states, then we will see interference.

            Hopefully that makes sense?

            It is definitely possible to have objections to MWI, but the legitimate objections are a lot more subtle than any of the above, which are just basic misunderstandings rather than flaws in the theory.

          • Harry Maurice Johnston says:

            @lightvector, I think that at that point you are defining “universe” in such a counter-intuitive way (to mean the same thing as “point in configuration space” or “state vector”) that I’m not sure it is either useful or what most people mean when talking about MWI.

            I mean, if it works for you, that’s fine. And I could be wrong about what “most people mean”; it’s not like I took a survey! But I personally find your choice of words surprising.

            [Edit: I’m going to back off a bit on this one, after seeing the quote near the bottom of this comment which certainly sounds like it is using the word “universe” in the same way you are. I won’t delete this comment though because the fact that there is confusion about the word is in itself important.]

          • lightvector says:

            @Harry Maurice Johnston
            Sorry, yeah. I think there’s a common misconception that MWI says that there is some special subset of events that cause “splitting” and such that e.g. on each such event, suddenly and discretely two full copies of everything are created where only one existed before. If you’re thinking of the “many worlds” as being fundamentally different than the “multiple possibilities of a system in superposition”, then I feel like you’ve already missed the point. Maybe for some people the way to get that across is that to the degree they insist on thinking of literal separate ‘universes’ now being born as a result of some decoherence, they should be thinking of those ‘universes’ as having existed already even in just an isolated superposition even before that system became entangled with everything else. Not sure on that though – maybe this is more confusing rather than less.

            (I also recognize some of the inherent difficulties of speaking this way too, e.g. the choice of basis.)

  60. Bugmaster says:

    It’s too glib to say “There is no difference between theories that produce identical predictions”. You actually care a lot about which of two theories that produce identical predictions is considered true.

    True, but if I don’t have to say “…and also a star goes supernova in a totally undetectable way” every time I talk about chemistry, I’m just going to skip saying it. Otherwise, it’s just too much effort. That’s what Occam’s Razor is all about.

    “Choose the simpler of two theories that make identical predictions” isn’t trivial. You actually have to understand some philosophy in order to figure out which of two theories is simpler.

    Depends on what you mean by “philosophy”. You can spend a lot of time debating whether truth is beauty or whatever; or you could just say, “hey, wait a minute, believing in the Devil doesn’t help you predict anything, so the Devil isn’t an explanation, it’s just another word for ignorance”. The same applies to God, the Simulators, or any other entity that can account for literally any possible observation. You are free to believe in all kinds of devils if you want, but if you want to make accurate predictions, you need to also believe in something testable.

    Applying the two previous morals consistently lets you prefer the many worlds interpretation of quantum mechanics without having to worry about this being “untestable”.

    Well yeah, and believing in devils also allows you to do all kinds of cool stuff… except for two things: making accurate predictions, and omitting useless clauses whenever you talk about your models of the world.

    That brings us back to the beginning, though: if I say, “acid + base = water + salt + undetectable supernova”, and you say “acid + base = water + salt + an undetectable devil gets his horns”, neither one of us is justifiably wrong. We can each believe in our own mythology, if that’s what makes us feel better. Are an infinite number of worlds “simpler” than wavefunction collapse? Maybe, maybe not, but it’s a question for philosophers, and not for anyone who is actually doing anything useful.

    • Scott Alexander says:

      If you were actually in Example 1, wouldn’t you be annoyed that the supernova-version was being taught and never challenged? Wouldn’t it seem important to you that the default explanation be the supernovaless one?

      • Bugmaster says:

        Well, on the one hand, if I were in Example 1, I’d probably be indoctrinated into supernovaism as well, so I would think it was normal. But if I, as I exist today, were in Example 1, then I’d probably find it mildly irritating, sure.

        But my point is that we are not in Example 1. We are in Example 4, where you say, “a + b = c + magic”, and I say, “a + b = c + sorcery”. As knzhou points out in his comment above, we can’t just dispense with the magic and/or sorcery, because we don’t have a complete understanding of how a + b actually works; if we did, we’d be able to devise a test to determine which model is more likely to be correct. Until we do, we are free to use whichever model is most convenient for the problem we are trying to solve at the time; we are even free to switch between models at will (as knzhou does). I am no more justified in saying “sorcery is true and magic is obviously false” than you are in saying, “magic is true and anyone who believes in sorcery is a fool”.

      • sclmlw says:

        If I were in example one above, I would want to understand how the supernova hypothesis made its way into the model in the first place. Was it a path-dependent addition used to justify getting out of more unsubstantiated hypotheses, or was it driven by evidence we don’t have in our current universe? If it’s the former, I could simply apply the real Occam’s razor and state, “When choosing between competing hypotheses, we should prefer the one that requires us to make the fewest assumptions. You’ve made an extra assumption here, but I have a model that makes fewer assumptions. Either justify your additional assumption via observation, or join me in preferring the supernovaless one.”

      • Ketil says:

        I would be put off by the contradiction of a cause affecting something beyond the event horizon, by the lack of any proposed mechanism connecting my mixing chemicals to a completely unrelated and very distant fusion reaction, and – most of all – by the apparent need to censor certain lines of inquiry.

        But I don’t think any of this applies to QM, for all its weirdness and esotericism.

      • Bunthut says:

        It does seem important, but could this not be for reasons other than truth? Truth doesn’t have to be identical with our norms about what may be believed, though the latter is obviously based on the former. After all, you presumably think it’s fine to teach high-school children the Bohr model of the atom. And we know that’s not true!

        When I hear about the supernova scenario, my first guess is that the reason these people care so much about the supernova is deceptive intent. Because while you can spin up a theory with a supernova that doesn’t imply any different observations, that is not how the claim will be intuitively understood. I don’t know the details here, since the example is fictional, but wrt paleontology, the devil in that story is very much supposed to have implications for prediction. It should make us predict the experience of hellfire if we don’t follow certain rules, for example. So we might want to insist on a model being used that isn’t easily misinterpreted.

        There are all sorts of reasons why I might prefer one of two models with identical predictions. Simplicity does play an obvious role: simpler models are generally easier to use and remember. In the absence of other considerations, this will usually dominate. But sometimes we have a few theories of similar complexity, and we keep them around because they lend themselves to different problems. And sometimes one is easier to use without being simpler: I’d be hard-pressed to say whether the Newtonian or the Lagrangian formalism is simpler, but almost all of us use the Newtonian one almost all the time, because it is so similar to our intuitive reasoning about physics, thereby allowing easy translation.

      • viVI_IViv says:

        If you were actually in Example 1, wouldn’t you be annoyed that the supernova-version was being taught and never challenged?

        It’s not like the fact that multiple interpretations of quantum mechanics exist is a taboo topic that physicists never discuss or teach. It might be mentioned in passing in a quantum mechanics course; it’s not given an in-depth discussion mainly because it’s of more interest to philosophers of science than practicing physicists.

        To make things concrete, how would you teach quantum mechanics?

        Say you are discussing entanglement: there are two qubits in the entangled state |00> + |01> + |10> – |11> (ignoring normalization); you measure the first qubit in the 0/1 basis; what happens next?

        The usual answer is that with 0.5 probability the state (wavefunction) of the system becomes |0> (|0> + |1>), and with the other 0.5 probability it becomes |1> (|0> – |1>). “Becomes” here is what is meant by wavefunction collapse.

        ———————-

        Now let’s try to reformulate this in the language of MWI: we have to introduce another variable for the observer state, so the total state at the beginning is

        |I’ve observed nothing> (|00> + |01> + |10> – |11>),

        the measurement now is a unitary operation between the observer state and the first qubit, defined as
        |I’ve observed nothing> |0> -> |I’ve observed 0> |0>
        |I’ve observed nothing> |1> -> |I’ve observed 1> |1>

        the state right after this operation becomes:

        |I’ve observed 0> |00> + |I’ve observed 0> |01> + |I’ve observed 1> |10> – |I’ve observed 1> |11>
        where “becomes” here means Schrödinger evolution rather than collapse.

        But wait, the attentive student will ask: how come the observer states never seem to interfere the way quantum states do? If e.g. you decide to go through the left door if you observed 0, or through the right door if you observed 1, why doesn’t the distribution of your final position show an interference figure like a photon does in the double slit experiment?

        You answer: because something something decoherence (perhaps – we haven’t figured it out yet), it is as if the world has split into independent branches that don’t interfere with each other, and behave exactly as if with 0.5 probability the state of the system became |0> (|0> + |1>), and with the other 0.5 probability it became |1> (|0> – |1>), with the caveat that these probabilities represent indexical uncertainty.

        Which explanation do you think is simpler and more “elegant”?
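
        For what it’s worth, both calculations can be typed out and checked directly – a small sketch using my own basis ordering, with the normalization factor of 1/2 restored:

        ```python
        import numpy as np

        # Two-qubit state (|00> + |01> + |10> - |11>) / 2, basis |00>,|01>,|10>,|11>.
        psi = np.array([1, 1, 1, -1], dtype=complex) / 2

        # Copenhagen-style bookkeeping: measure the first qubit in the 0/1 basis.
        psi_grid = psi.reshape(2, 2)             # rows: first qubit, columns: second
        for outcome in (0, 1):
            branch = psi_grid[outcome]
            p = np.vdot(branch, branch).real     # Born rule: 0.5 for each outcome
            post = branch / np.sqrt(p)           # collapsed state of the second qubit
            print(outcome, p, post)              # 0 -> (|0>+|1>)/sqrt(2), 1 -> (|0>-|1>)/sqrt(2)

        # MWI-style bookkeeping: adjoin an observer register and entangle it with
        # the first qubit by a unitary map; nothing is ever discarded.
        # Observer basis: 0 = |observed nothing>, 1 = |observed 0>, 2 = |observed 1>.
        total = np.zeros((3, 2, 2), dtype=complex)
        total[0] = psi_grid                      # |observed nothing> (x) psi
        after = np.zeros_like(total)
        after[1, 0] = total[0, 0]                # |nothing>|0..> -> |observed 0>|0..>
        after[2, 1] = total[0, 1]                # |nothing>|1..> -> |observed 1>|1..>
        for idx, label in ((1, "observed 0"), (2, "observed 1")):
            weight = np.vdot(after[idx].ravel(), after[idx].ravel()).real
            print(label, weight)                 # one superposition, two 0.5-weight branches
        ```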

        • lightvector says:

          You answer: because something something decoherence (perhaps, we haven’t figured it out yet) it is as if the world has split into independent branches that don’t interfere with each other, and behave exactly as with 0.5 probability the state of the system became …

          This is wrong/misleading, because it suggests that MWI throws up its hands at how or why “decoherence” happens. One of the appealing parts of MWI is precisely that it does NOT throw up its hands at this (unlike collapse theories, which do throw up their hands and don’t explain how or why it happens). MWI observes that the math that all quantum theories already agree on at the small scales explains exactly when interference will or won’t happen.

          Interference between “branches” of a superposition happens if and only if those branches proceed to contribute amplitude to indistinguishable universe states. When this happens, exactly as the mathematics of QM says for all small-scale situations, the amplitudes will add up and therefore positively reinforce or negatively cancel, depending on their relative phase.

          The photon with two slits interferes with itself because once it hits the final screen, the “state of the universe” is such that it is completely indistinguishable whether it “actually” went through the left slit or the right slit. So the amplitudes combine into the same states (a different state for each final position on the screen) and produce interference.

          The human with two doors does NOT interfere with themselves because the state of the universe is NOT exactly the same depending on which has happened. The neurons in the human’s brain will be different (remembering left door vs right door), the positions of their body and muscles will all be different, the air currents in the room will be different, the vibrations that their footsteps sent out will be different, etc etc, causing trillions upon trillions of differences.

          So the second explanation is indeed more elegant because it simply uses the existing math that all QM interpretations already agree on to explain a phenomenon that the first explanation does not provide any mechanism for (when “collapse” will or won’t happen, when interference will or won’t happen).
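
          Here is a toy version of exactly that – one qubit plays the photon’s path, a second qubit plays “everything that could record which slit it took” (the states and phases are invented for illustration):

          ```python
          import numpy as np

          def intensity(rho_path, phi):
              # Detection probability at a screen point whose two path lengths
              # differ by phase phi: overlap with |d> = (|L> + e^{i phi}|R>)/sqrt(2).
              d = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
              return (d.conj() @ rho_path @ d).real

          def reduced_path_state(joint):
              # joint: 4-vector over |L,e0>,|L,e1>,|R,e0>,|R,e1>; trace out the env.
              psi = joint.reshape(2, 2)
              return psi @ psi.conj().T

          # No which-path record: the environment stays in |e0> for both branches.
          coherent = np.array([1, 0, 1, 0], dtype=complex) / np.sqrt(2)

          # Perfect which-path record: the environment flips to |e1> on path R.
          marked = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

          for name, state in (("no record", coherent), ("which-path record", marked)):
              rho = reduced_path_state(state)
              pattern = [round(intensity(rho, phi), 3)
                         for phi in np.linspace(0, 2 * np.pi, 5)]
              print(name, pattern)
          # "no record" swings between 0 and 1 (fringes);
          # "which-path record" is flat at 0.5 -- the branches no longer interfere.
          ```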

          • Daniel says:

            Awesomely clear explanation! Do you have some blog or something that I could follow?

          • viVI_IViv says:

            This is a good explanation, thanks.

            But still, it requires the einselection assumption in order to have a privileged “classical” basis. Einselection has been criticized as requiring circular assumptions.

            Moreover, the existence of high-temperature macroscopic quantum effects, such as high-temperature superconductivity, seems to imply that mere thermal interaction with the environment isn’t sufficient to destroy quantum coherence.

      • sclmlw says:

        +1 to this.

        Untestable hypotheses are not science, they’re philosophy. We treat them differently, because for every right hypothesis there are a hundred+ correct-sounding wrong hypotheses we could come up with. Anyone who has done bench work in a lab understands this, I think. The intuition we should build is to not have confidence in a hypothesis – no matter how good it sounds – until it has been tested.

    • Bugmaster says:

      Thanks, I always try but I so very rarely succeed 🙂

  61. len says:

    Example I is insufficiently differentiated from Example II; both are the addition of unnecessary parameters to the function that generates the prediction.

    There aren’t actually many models that differ only on truth value without an (easily comparable) measure of simplicity. If you want to make a clearer example without involving a complexity penalty, consider perhaps using mechanisms of action as an example? Except, well, here knowing how a drug works isn’t as important as knowing that it works at all (and whether there are any side effects).

    On another note: taking into account simulation theory, many-worlds is impossible because you’d need the computing system to have infinite memory, whereas wavefunction collapse is much more resource-efficient.

    • knzhou says:

      In physics it’s more ambiguous than you think: even mechanisms aren’t necessarily good. Consider two theories of gravity:

      1. “A pulls B because it exerts a gravitational force”
      2. “A influences the gravitational field, whose effect then propagates to B, which then feels a gravitational force”

      In Newton’s day, #2 had absolutely no benefit over #1, despite providing a “mechanism”, because the gravitational field plays absolutely no role besides acting as an inert middleman. It’s just #1 with extra steps. (And for any number of extra steps, you can always demand another. After all, by what mechanism does A influence the gravitational field? Should yet another field intervene?)

      So Newton famously went with #1 and shut the door with “hypotheses non fingo”. Centuries later, the gravitational field became a useful concept, but only with the passage of time and the maturation of the rest of physics. Insisting on mechanism from the start would have slowed progress to this point.

    • Joseph Greenwood says:

      I mean, we could still be simulated on a quantum computer right?

  62. encharitimone says:

    “You actually have to understand some philosophy in order to figure out which of two theories is simpler”

    I’m not sure if understanding philosophy is necessary for making this determination, but it definitely isn’t sufficient.

    Also: in a lot of cases (e.g. many-worlds/collapse), without proposed mechanisms and other vital details, I don’t think there’s sufficient basis for assessing the net complexity of either proposal. For example, if you start asking questions about matter/energy conservation, many-worlds looks peculiar. But if it’s fine to wave that away, then why can’t we wave away “the wavefunction collapses for reasons”?

    • knzhou says:

      Well, energy is conserved in many worlds just fine. Over time the branches have smaller amplitudes, so the growing number of branches is cancelled out by the shrinking amplitudes to give a constant energy expectation value. The only reason people get confused about this is because popsci talks about the branch splitting as “new universes being born” and people imagine violent mini-Big Bangs or whatever. In reality, the evolution is perfectly smooth and not qualitatively different, from an energy conservation point of view, than any undergraduate textbook problems.

      Even if energy weren’t conserved in many worlds, it wouldn’t be a knockdown blow. Energy isn’t conserved in general relativity, and that’s no problem. You don’t need a “mechanism” to explain why, either. It’s just that the usual proof of energy conservation fails. Principles valid in special cases can and often do fail in general cases.

      • Lambert says:

        Energy isn’t conserved in general relativity

        So there are time asymmetries?

        • chrisminor0008 says:

          Yes. The universe is getting bigger.

          • benf says:

            Indeed. It’s pretty amazing that people failed to notice for so long that the past was expanding. With the unification of space and time, it should have been totally obvious from the start that space would be expanding as well. The fact that this required observation to realize is one of the greatest failures of theoretical physics.

          • doubleunplussed says:

            @benf the past is expanding?

            That’s not a coherent statement. To describe something as expanding presupposes time exists already. The past is *big*, but in what sense is it expanding? It can’t be bigger at one moment in time than another, because only a single instant of it exists at any given moment.*

            In that there’s more of it as we move forward in time? That’s no more meaningful than saying the right side of my bedroom expands when I take a step to the left.

            * Not quite true since relativity, only a slice exists at each moment, and which slice depends on your velocity – and the slices are indeed getting bigger, but only spatially…

          • benf says:

            @doubleunplussed The past contains more time today than it did yesterday. Tomorrow the past will be one day larger. This is identical with the statement “time passes”. It’s an unconventional vocabulary but it makes perfect sense and actually describes the flow of time accurately.

            You seem to be accepting the block universe model, in which the future exists just as much as the past and only our point of view changes as we move through time. I happen to think this view forces us into unnecessarily contorted understandings of the space-time unity that Einstein demonstrated, and also wreaks havoc with our intuitions of time, which I think ought to be taken more seriously than many physicists do.

          • chrisminor0008 says:

            @benf – You’re assuming time started at the Big Bang? That’s one hypothesis among many. Also, a Big Crunch in our future was a hypothesis that was only recently ruled out. There’s no real reason why a universe with a finite history can’t be contracting.

          • benf says:

            @chrisminor0008 I don’t see that I’m assuming time STARTED at the big bang. I’m in favor of the “big break” idea that the expansion of the universe actually represents not a spontaneous explosion of energy but a downward fluctuation in some more fundamentally energetic process. But I think time and space are identical. Spacetime itself seems to be some kind of phase shift, as a state drops towards its lower energy. Or something like that – I’m not a freaking quantum scientist or anything, but I learn as widely and as deeply as I can, and that’s the best I can make of it.

      • encharitimone says:

        Fascinating! I’ve had classes in quantum mechanics, we talked about many-worlds, and somehow that didn’t come up. The internet: still teaching me more than college.

        • eric23 says:

          The internet teaches you scattered random things. A lot in some areas and none in others.

          College teaches you a relatively uniform basis in many areas.

    • entropy says:

      It’s the other way around. MWI conserves energy: in the basis of eigenstates of the energy operator, the magnitudes of the amplitudes are constant. Wave collapse, on the other hand, can change these amplitudes, thus violating conservation of energy.
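
      In symbols (a minimal sketch): for |\psi\rangle = \sum_n c_n |E_n\rangle, unitary evolution only rotates phases, c_n(t) = c_n(0) e^{-i E_n t / \hbar}, so \langle E \rangle = \sum_n |c_n|^2 E_n never changes. A collapse replaces |\psi\rangle with P|\psi\rangle / ||P|\psi\rangle|| for some projector P, which generically changes the |c_n|^2 and with them \langle E \rangle.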

  63. Joy says:

    Then one day you come to the land of Sir’n, where thousands of people say a daily prayer to the largest donut in the world.

    Oh the horizontal tower of truth and beauty!
    We toil united before you, day and night
    By the glory of Higgs, give us a sign of the one unified theory
    Of supersymmetry hiding in the space between particles
    So we could smite the unbelievers
    Trying to prevent us from building an even bigger donut.

    You look around and see Sabine Hossenfelder, dressed as Cassandra, hoarse from singing her lungs out about groupthink and the blinding yet fruitless beauty of naturalness.

    Moral of the story: Where money and livelihood are involved, morals and reason take the back seat.

    • knzhou says:

      It’s not really possible to respond to this in a rational way, but I’ll just point out that if your main source of physics news is Sabine, you’re not getting a remotely balanced story.

    • Alkatyn says:

      Tbh the ability to create novel particles is better proof for divinity than most of the other options available

  64. knzhou says:

    While I and II are correct, the moral of III does not follow. As a practicing physicist, I care about interpretations to the degree that they let me understand the particular problem I’m thinking about at the moment — and for many problems, many worlds is *not* the best option, despite how much I like it. Planting a flag on a particular interpretation and saying that it’s unambiguously the “best” or “simplest” one is a luxury only philosophers can afford.

    Incidentally, I and II bear a suspicious resemblance to Goodman’s new riddle of induction. I remember being totally obsessed when I first heard about it. Its setup is a bit contrived, from a scientific point of view, but like your examples, it’s a good testing ground for thinking about interpretational issues.

    • tmk18 says:

      I mean no-one’s stopping you from thinking in terms of Copenhagen if it makes a particular problem easier to understand (although I would like to hear what kind of problem that would be), but that’s different from the question of which interpretation more accurately describes how the universe actually works. Like how usually it’s perfectly fine to use Newton for everyday mechanics even though Einstein is much closer to actual reality.

      Unless I’m misunderstanding you. Are you maybe denying that there is something like an objective reality?

      • Bugmaster says:

        I’m not knzhou, but here’s the way I think about it:

        Let’s assume that, at this very moment, there exists a certain number of pebbles on the surface of the exoplanet LHS 3844b. That number is either odd, or even. Alex says that the number is odd. Bob says that the number is even. Cindy says that, until someone figures out a way to land a probe on LHS 3844b, neither Alex nor Bob is justifiably correct, and we’re free to use whichever pebble parity makes our other calculations easier. Is Cindy “denying that there is something like an objective reality”?

        • Long Disc says:

          Bugmaster, actually, Cindy is not quite denying reality but is behaving as if she does deny it. Alex and Bob are each making an assumption without much basis, clearly stating the assumption and then proceeding with their calculations. Cindy is not making an assumption but just searching for her keys under the streetlight.

          Suppose that the calculations under Alex’s and Bob’s assumptions are both feasible but yield different answers, and this answer is important for Dylan. If Dylan talks to both Alex and Bob, he will learn about the uncertainty and will be able to make his decision accordingly. If Dylan talks only to Cindy, he will be in a much worse position.

          • Bugmaster says:

            Suppose that the calculations under Alex’s and Bob’s assumptions are both feasible but yield different answers…

            This is explicitly not the case in the scenario Scott is discussing. If you have two different assumptions that make two different predictions, then the obvious next step is to collect some evidence to test which prediction is correct. You’ve stated that the predictions “are important”, which, by definition, implies that evidence can be collected (at least, in principle).

      • thetitaniumdragon says:

        Why would we assume that there are an infinite number of branching parallel universes?

        There’s no reason to believe that interpretation whatsoever.

        It’s not only untestable, it’s nonsensical. It’s Flying Spaghetti Monster levels of nonsensical. “Oh yes, the reason why wavefunction collapse happens is because there’s an infinite number of parallel universes that are selected from every time this happens which can never be observed.”

        The idea that things don’t actually have definite properties until observed is entirely consistent with observation (including things like the double-slit experiment, which makes perfect sense under the Copenhagen interpretation – of course we observe what we observe, reality is probabilistic in nature) and makes perfect sense; it’s just weird and counter-intuitive. But being counter-intuitive isn’t the same thing as being wrong; in fact, a lot of QM is weird and counter-intuitive.

        Infinite parallel universes is basically exactly like the distant supernova thing. In fact, it’s exactly the same sort of error.

        • Harry Maurice Johnston says:

          Well … I wouldn’t say no reason. If you take away the collapse postulate, the maths unambiguously predicts those branching “universes” and really the only problem is that you still have to add the Born rule back in somehow. (I happen to think that’s a fairly big problem, but YMMV.)

          To put that another way, given that the math predicts an infinite number of branching universes, is the fact that this is counter-intuitive a good reason to introduce the collapse postulate just to remove them from the theory?

          (And then there’s the other problem with Copenhagen, the fact that if taken seriously it blatantly violates the principle of relativity. This happens in a way that, it so happens, can’t ever actually be observed, but it is still rather worrying.)

          • Robert Jones says:

            I’m just repeating my other comment, but the maths doesn’t have any branching. The maths gives you a wavefunction evolving in an entirely deterministic way. To understand it as a superposition of different possibilities, you have to do something like harmonic analysis, but when you do that, you bring your own eigenbasis. The maths doesn’t give you an eigenbasis.

          • Harry Maurice Johnston says:

            My understanding is that when you model the process of doing a measurement the wavefunction naturally splits into two or more incoherent parts corresponding to the different branches. I suppose one could argue that you’re sneaking in an eigenbasis of your own choice as part of the model, but I’m not sure that’s a big problem provided the choice is physically realistic.

            I think a thermodynamic bath ought typically to be involved, and I guess that introduces some sort of arbitrariness in the eigenbasis – but from a physical perspective I suspect that would be along the lines of “let’s choose an instrument that is stationary with regards to the Earth, even though that is an arbitrary choice”.

            (In case it isn’t obvious, I am somewhat out of my depth here.)

          • TheAncientGeeksTAG says:

            @RobertJones

            I’m just repeating my other comment, but the maths doesn’t have any branching. The maths gives you a wavefunction evolving in an entirely deterministic way. To understand it as a superposition of different possibilities, you have to do something like harmonic analysis, but when you do that, you bring your own eigenbasis. The maths doesn’t give you an eigenbasis.

            Right.

            There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are incompatible opposites.

            As you mention, coherence-based MWI has the problems that its worlds aren’t objective (since they depend on a choice of eigenbasis), aren’t causally isolated, and aren’t, as far as we know, very large.

            One would naturally tend to conceptualise a world as being about the size of the observable universe. But experimentally, complex coherent systems are difficult to maintain, and require extreme conditions, such as cooling to near absolute zero. The whole difficulty of quantum computing is the difficulty of maintaining superpositions of a few dozen or a few hundred particles.

            Large-world, decoherence-based MWI has the advantage that the entities it deals with are a much better fit for what would intuitively count as a “world”, in that they are potentially universe-sized and causally isolated from one another. But it has the disadvantage that the criteria for when and how worlds split are no longer obvious. Small-world, coherence-based MWI is “just” the evolution of the SWE. Large-world MWI is not: it has to assume something about the nature of the universe that causes the coherent states described by the SWE to decohere. That is, it requires some additional structure. Small-world MWI wins on simplicity, and loses on failing to describe anything that could plausibly be called a world. Large-world MWI wins on describing intuitive worlds, and doesn’t clearly win on simplicity.

          • Alsadius says:

            Regarding the Born rule, there’s a rather intuitively appealing MWI explanation I came up with some years ago, but that got criticized for missing the point. (I took a quantum class in university, but I’m no expert). I’m almost certain the below is wrong, but I’m curious if anyone can give me an explanation of why.

            The probability of observing a particle at location L is proportional to the square of the wavefunction at L. So the natural reason for it to be squared is that the observer and the observed need to be in the same universe. P(particle in universe where particle’s at L)*P(observer in universe where particle’s at L) = wavefunction squared.
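
            Written out in symbols (my own loose notation), the claim is

                P(particle in universe where particle is at L) * P(observer in that universe) = \psi(L) \cdot \psi(L) = \psi(L)^2,

            treating \psi(L) itself as the probability of each factor. (I realize the textbook rule is |\psi(L)|^2 = \psi^*(L)\psi(L), since amplitudes are complex, so maybe that’s already where this falls apart.)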

            Now, tell me how hilariously wrong I am.

          • Robert Jones says:

            @TheAncientGeeksTAG: That’s a helpful clarification of the distinction, thanks.

        • uau says:

          Why would we assume that there are an infinite number of branching parallel universes?

          Because we can see the influence of the other universes on a small scale. The double-slit experiment for example shows versions of a particle that took different paths interfering with each other.

          Infinite parallel universes is basically exactly like the distant supernova thing. In fact, it’s exactly the same sort of error.

          No, it’s just assuming that the wavefunction space continues beyond the small scale which is directly observable via quantum interference. Saying that you’re sure parallel universes don’t exist is more like saying that there exists no matter or even no empty space beyond the area that you can directly see with a telescope.

          • TheAncientGeeksTAG says:

            Because we can see the influence of the other universes on a small scale

            If they are continuing to interact, are they separate universes?

          • jermo sapiens says:

            Because we can see the influence of the other universes on a small scale. The double-slit experiment for example shows versions of a particle that took different paths interfering with each other.

            I don’t believe this is the position that MWI proponents take, but maybe I’m wrong. When the wavefunction interferes with itself it has not collapsed yet, and therefore there is no universe-split either. In my understanding the different universes occur when the wave function collapses, not before.

          • lightvector says:

            @jermo sapiens:
            Not quite. “Universe splits” happen all the time under MWI even at the small scale, EXACTLY the way they would at the large scale. There’s exactly *nothing* special between small and large. Attempt at explanation: suppose we observe the following model of the world to make highly accurate predictions:

            * Interactions between particles and/or the passage of time result in a thing we dub “amplitude” being sent into multiple possible future states of the universe, differing only in those particles initially (until those particles in turn affect other particles and cause them to differ, and so on). The amount of amplitude sent is 100% deterministic based on the kind of interaction. This is the “branching” of the universe and it happens all the time everywhere.

            * Some future universe states can be reached by multiple possible paths. E.g. branch 1 followed by branch 3 leads to an identical universe state as branch 2 followed by branch 4. Then, the amplitudes simply sum up. But since amplitudes can be negative/complex numbers, this may result in them cancelling out! So, weirdly, if state A can lead to C, and state B can also lead to C, B might do so in a way that cancels out A and makes C *less* likely (C ends up with *less* amplitude). This is called “interference”, and it is how we know that such branching happens and is necessary, at least mathematically, to model the physics. (A toy calculation follows near the end of this comment.)

            * All quantum interpretations AGREE up to this point on the mathematics of such “branching”, even if not on whether this branching is “real”. If you want to postulate that it is not “real”, and you aren’t content with just being agnostic, then it is difficult to come up with “good” alternative mechanisms – QM is a surprisingly rigid mathematical theory and leaves you with fairly little wiggle room.

            Now where MWI picks up is:
            * In experiments, there seems to be no limit to how much two branches can diverge before coming back to interfere. Two branches can diverge with a thousand particles all different, but if you’ve managed to carefully engineer it so that there is nontrivial chance (“amplitude”) for those two branches to then evolve in the future so as to *exactly* match up again on literally every single particle, then you will observe interference. There is nothing in the math to suggest that this will stop at a thousand particles, or even a million, or a trillion, rather than continuing forever.

            * But in practice, unless you’ve engineered it perfectly, once you have millions of particles all differing, the chance for two branches to exactly meet up again is nil. So once two branches differ enough (for example, a human *looks* at the particle, and now billions of neurons in the human’s brain are all firing differently depending on which branch) we might casually say those are now two independent worlds because never again in practice will they exactly match up again and interfere. Mind you, MWI does NOT say that there are “two” worlds at this point. The number of worlds is ill-specified because *all the time* every different particle interaction is branching with some of those branches recombining and some of them diverging further, and they do so in a more continuous and smooth way than the purely discrete “branches” suggests (this is why people speak of the “wave function”, which better evokes the continuity that is actually present in the math). The notion of there now being “two” worlds is purely a human notion. And theoretically those “two” worlds could still interfere with each other, very very very very very very very slightly (to the degree that each one does send a very very very very very very tiny amount of amplitude to states that match up exactly again).

            So under this perspective, MWI is attempting to say that the exact behavior we’ve seen at small scales – branching and recombining with interference (except often more “continuous” and “wave-like” than purely discrete branches), simply continues up to large scales exactly the same.
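
            If it helps, here is the toy calculation promised above (all numbers invented purely for illustration): amplitudes for two routes into the same final state sum *before* you square, so they can cancel, which plain probabilities never could.

                # Two routes into the same final universe state, with opposite phases.
                amp_route_1 = 0.5 + 0.5j      # e.g. branch 1 then branch 3
                amp_route_2 = -(0.5 + 0.5j)   # e.g. branch 2 then branch 4

                total = amp_route_1 + amp_route_2     # amplitudes sum first...
                p_quantum = abs(total) ** 2           # ...then square: 0.0, full cancellation
                p_classical = abs(amp_route_1) ** 2 + abs(amp_route_2) ** 2   # 1.0 if probabilities just added
                print(p_quantum, p_classical)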

            (Note: I lean mildly towards MWI, but I also understand the legitimate objections against it. Many of the objections in various comments seem to rest on plain mathematical or theoretical misunderstandings, but Harry Maurice Johnston’s observations about objections in this thread have been very good).

            Does that help at all?

          • jermo sapiens says:

            @lightvector:

            Thank you for this. It helps a little bit, but then I’m wondering what it is that differentiates MWI from Copenhagen. If there are not multiple worlds, and there is not another me in a different universe doing cocaine with hookers instead of commenting on SSC, what you described seems compatible with what (I think) I understand about the Copenhagen interpretation.

            So under this perspective, MWI is attempting to say that the exact behavior we’ve seen at small scales – branching and recombining with interference (except often more “continuous” and “wave-like” than purely discrete branches), simply continues up to large scales exactly the same.

            Yet at large scales the probability of two branches lining up perfectly in the future is basically nil.

          • lightvector says:

            @jermo sapiens

            Yep, the point is to say that MWI doesn’t have a separate postulate about when two worlds actually become “fully separate”, or about how many “separate” worlds there ever are. Because theoretically they never are exactly independent. It’s just the same objective rule all the way up – you get interference if and only if you contribute amplitude to states of the universe where *everything* matches up.

            I want to say there are sort of at least two versions of Copenhagen (probably more). I don’t know the proper terminology for distinguishing these, but it’s something like:

            1. “Hard-collapse-ish” – Says that superpositions (i.e. branching) happen at small scales, but there is actually some (currently unknown) mechanism that literally causes collapse and forces the universe to “pick” one branch, and only that one branch then goes on to “exist” thereafter. Collapse does not necessarily have anything to do with “observers” or “humans”, but it *does* reliably happen somewhere between nano-scale physics and human-scale physics. These versions of Copenhagen can actually be distinguished experimentally from softer versions and/or MWI – e.g. if you postulate that collapse happens any time you get to a superposition with more than 1 million particles… well, you could actually try to test that some day with good enough technology! (Personally, I predict we will not see such a thing, based on fuzzy heuristics of mathematical elegance).

            2. “Soft agnosticish” – To quote viVI_IViv above:

            In the map-territory analogy: HV theories assume there is a territory and it’s deterministic but our map systematically leaves certain details out, MWI assumes that again there is a territory and it’s deterministic but our maps are randomly generated from it, Copenhagen (and more explicitly Quantum Bayes theories) assume there are only maps.

            So basically whereas MWI claims that there is actually an objective reality of what happens under the hood in order to give us these observations (in particular, the one that most naturally follows from the math, even though it requires you to bite the resulting bullet) and you personally only see a “random” result due to Born’s rule + indexical uncertainty, “Soft agnosticish” Copenhagen says something like “beyond the actual statistics you can observe, there is no truth of the matter”. There is no reality about what happens quantum mechanically (branching or no branching or whatever) except insofar as you can mathematically model it like so (with small-scale branches and superpositions), and when it interacts with things, you may observe that it does so like so (according to the statistics that “collapse” would generate). It’s not saying necessarily that branches or collapse “really” happen; it’s just saying that that’s the mathematical model for what is happening, and the only reality you can know is your observations.

            (This is slightly different from pure agnosticism, which just says “I don’t know”).

            Again, there are probably other subtle philosophical flavors different than the above (as well as for MWI too, not to mention other interpretations), and I don’t know if I’ve labeled them the same way as others would, but that’s my understanding.

          • jermo sapiens says:

            These versions of Copenhagen can actually be distinguished experimentally from softer versions and/or MWI – e.g. if you postulate that collapse happens any time you get to a superposition with more than 1 million particles… well, you could actually try to test that some day with good enough technology! (Personally, I predict we will not see such a thing, based on fuzzy heuristics of mathematical elegance).

            Thanks again for your comment. This is where I would like to see some experiments, to be able to tell what exactly causes a collapse to happen.

          • Shion Arita says:

            I think people get hung up on the whole ‘separate universes’ part of MWI, which might be a bit poorly named. I think it is accurate in that there really are other versions of ‘you’ out there that for all intents and purposes don’t interact with you much, but the interaction is not exactly zero; it’s just very close to zero. Like an apple falling off a tree on the other side of the world has a gravitational effect on you, and this effect is negligible but not zero.

            My assessment of things is pretty much in line with lightvector’s. There is no moment of ‘collapse’ nor any moment where the worlds ‘separate’; there are only correlations, and there are events that have a greater or lesser ability to interfere with each other somewhere down the line. It’s easy to have a single photon interfere going through a couple of slits, but it is very difficult to have an entire macroscopic system interfere with its counterpart after being influenced by a particle coming out of a decaying nucleus, because the number of correlations that would have to be reversed is astronomical, so we don’t see it happen.

            I think it’s a bit confusing and misleading to call the other possibilities other universes. Maybe a better term would be ‘correlation bundles’, or something. Additionally, I think it would be a mistake to say that the other correlation bundles don’t exist or aren’t real. To me that would be a lot like saying that there’s a devil that hides the dinosaur bones. The math doesn’t treat any particular outcome as special, so why think that only one is real? It requires no extra assumptions to believe that the version of you that sees the other outcome, and thinks much like you do, exists in the same way you do.

          • viVI_IViv says:

            Thanks again for your comment. This is where I would like to see some experiments, to be able to tell what exactly causes a collapse to happen.

            Quantum superpositions have been demonstrated in a macroscopic object with a trillion atoms. As far as I know, this is as close to Schrödinger’s cat as current experimental technology can get.

    • Metanoialgia says:

      (epistemic status: published a handful of papers in quantum optics before leaving academe to found a startup; your credentials probably beat mine.)

      I’m of the view that MWI is wrong, but I’m genuinely curious if I should update on an active researcher liking it. I’m not entirely certain I can pass the Ideological Turing Test for MWI, but the rationalsphere belief in MWI appears to be held in opposition to a misunderstanding of single-world, one I believe Scott makes here. Perhaps you can clarify my understanding.

      To summarize for non-QM people reading: The usual argument against wavefunction collapse tends to take issue with a) the existence of observers being important to the theory and b) the discrete phenomenon of collapse itself. These stem from thinking of the wavefunction as a realist object of some sort, which is essentially the central claim of MWI as I understand it. The single-world view, however, treats the wavefunction as fundamentally non-realist: it’s purely a representation of the observer’s incomplete knowledge, and while we are able to understand how a given state evolves, we are unable to ever know what is occurring “physically” other than through measuring observables, hence the centrality of observers.

      Stated another way, Bell’s theorem prohibits local hidden-variable / local realist theories. MWI bites the bullet with global realism, so a change in state splits the world in some sense; this makes wavefunction collapse look like adding Satan. Single-world picks the other branch of the dichotomy with local non-realism: even while our state of knowledge collapses, the “physical” quantum behavior can’t collapse, not least because it’s nonsensical to even talk about physical quantum behavior beyond observables; this makes adding realist foundations to QM look like adding Satan. Scott’s Section III becomes less clear.

      The meat of my viewpoint: this fundamentally seems like an argument for or against realism, and I think the Born rule tips the scales in favor of non-realism, even granting that it may someday be proven within MWI. My argument, in short, is: 1) Our ability to make physical measurements is limited to quantum observables via the Born rule; 2) Bell prohibits local realism, so we must choose between adding a foundational realist wavefunction that is impossible to measure in any way other than observables (i.e., the same prediction standard QM makes) or treating the wavefunction purely as an incomplete state of knowledge of a non-realist foundation; 3) Occam’s razor leads me to non-realism. I’d be genuinely interested to hear where you diverge from this analysis.

      Addendum (epistemic status: not whatsoever certain, and more cultural analysis than science): I think some of the rationalsphere MWI belief comes from our (largely accurate) heuristic that arguments for impossibility — e.g. “there’s no deeper explanation of reality than quantum mechanics” — are almost always wrong. I believe that heuristic fails here since QM is filled with mathematical no-go theorems. I also sometimes see “Copenhagen” strawmanned rather uncharitably in these discussions, so I think there may be some tribalism at play: Scott has argued before that rationalism is the belief in Yud as the rightful caliph, and the rightful caliph believes MWI. Again, see epistemic status.

      • Robert Jones says:

        I think you’re right that the essence of MWI (at least in Everett’s conception) is (a) realism and (b) including the wavefunction as a real object (perhaps the only real object). I’m a bit puzzled when you say that the single-world view treats the wavefunction as fundamentally non-realist and purely a representation of the observer’s incomplete knowledge. Don’t interference effects and Bell’s theorem contradict this? If it’s a representation of incomplete knowledge, doesn’t that imply there’s a real thing of which our knowledge is incomplete? That sounds like hidden variables.

        • migo says:

          “Don’t interference effects and Bell’s theorem contradict this?”

          The way I picture the Copenhagen viewpoint is something like this: imagine a variable X represented by a probability distribution. Let that probability distribution evolve with time. Every time you look at X, you sample from its probability distribution and X is assigned a value (or a region in the distribution). The probability distribution of X is updated, and as before, continues to evolve with time until the next observation/realization. In the double-slit experiment, when we look at a particle after it has gone through the slits, its probability distribution is interference-like (because of the wave-like nature of how the distribution evolves with time). If we obtain information on the location of the particle at the slits, we make it re-start its probability distribution from that point (and it loses the interference-like shape at later points).
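
          In toy code, the cartoon I have in mind looks something like this (everything invented for illustration; the real evolution is of complex amplitudes, with the distribution coming from their squared moduli, but the “sample and restart” bookkeeping is the same):

            import random

            N = 10
            weights = [1.0] * N   # unnormalized distribution over N positions

            def evolve(w):
                # Stand-in for the wave-like evolution of the distribution.
                return [0.25 * w[(i - 1) % N] + 0.5 * w[i] + 0.25 * w[(i + 1) % N]
                        for i in range(N)]

            def observe(w):
                # Looking at X samples a value; the distribution restarts from it.
                x = random.choices(range(N), weights=w)[0]
                return x, [1.0 if i == x else 0.0 for i in range(N)]

            for _ in range(5):
                weights = evolve(weights)     # evolves freely between observations
            x, weights = observe(weights)     # e.g. a which-slit measurement
            for _ in range(5):
                weights = evolve(weights)     # evolution restarts from the sampled value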

        • Metanoialgia says:

          Good point on my explanation of non-realism. I think I may have oversimplified that while editing the wall of text down. By incomplete knowledge, I mean that it’s more like a model of how reality evolves than it being a representation of reality that’s just incomplete, the latter of which would definitely be hidden variables. My claim is essentially Sidney Coleman’s: a wavefunction is a mathematical bookkeeping device for a collection of numbers that predict the probability of a future event.

          It’s essentially claiming that reality, whatever it may be, is both unknown and fundamentally unknowable. All that can be known is quantum observables. We can take measurements or construct a known state, parameterize a wavefunction equation with them, predict how it will evolve, and get a probability out of it, but we’re not making any fundamental claims about what’s going on under the hood. Agnosticism rather than ignorance, perhaps.

          It’d be circular reasoning to invoke the uncertainty principle here, but it’s a decent mental model.

          Interference effects mostly come from amplitudes being complex rather than real. I think Scott Aaronson has a post about that in the context of quantum computing, but I can’t find the link right now. Shouldn’t differ too much between interpretations other than what amplitudes themselves represent.

          Bell’s theorem and entanglement more generally are what pushed me away from MWI originally. The classic dilemma is between Einstein’s “spooky action at a distance” (local realism) and MWI’s global realism, where it can’t be action at a distance because everything is in fact the wavefunction. Of those two choices, MWI is honestly saner; however, this presupposes realism. If we add the real/non-real axis in addition to the local/global axis, entanglement ends up being rather more mundane.

          In the local non-realist explanation of entanglement, it’s all just correlations. In quantum optics, the easiest example is entangled photons from spontaneous parametric down-conversion: you hit a non-linear crystal with a single photon, and sometimes it splits into two identical photons going off in mirror directions, and they must be identical in some regards because of conservation laws. Why should we be surprised that both of these have the same polarization when they’re measured at their ultimate destinations? Did spooky action happen at a distance, or is this just a predictable correlation? The wavefunction is said to collapse when one is observed because the observer knows what he or she measured and further knows that both photons are correlated, so the observer already knows what will be measured for the other photon. Purely a representation of knowledge; no physical change occurs when the first one is measured.

          ETA: What distinguishes the entangled photons from hidden variables is that it’s still non-deterministic and subject to the probabilities predicted by QM. If we’re measuring in the vertical polarization basis, we can’t choose for them to both be polarized up or both be polarized down — 50/50 odds either way on that one, and it’s genuinely random. But QM also guarantees that, whichever random choice is chosen, they’ll both have it. Pure correlation.
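
          Concretely (the standard textbook state, nothing exotic): the polarization-entangled pair is

              |\Phi\rangle = (|H\rangle|H\rangle + |V\rangle|V\rangle) / \sqrt{2},

          so each photon individually comes out 50/50 in the H/V basis, the mismatched outcomes (H,V) and (V,H) have probability zero, and the matched outcomes each occur with probability 1/2. Perfectly correlated, individually random.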

          • migo says:

            “In the local non-realist explanation of entanglement, it’s all just correlations.”

            A little bit more than only correlations, no? If you have two entangled particles and you induce a change in the probability distribution of some value of one, that change will be induced in the other one as well. Bell’s theorem, right?

          • uau says:

            In the local non-realist explanation of entanglement, it’s all just correlations. In quantum optics, the easiest example is entangled photons from spontaneous parametric down-conversion: you hit a non-linear crystal with a single photon, and sometimes it splits into two identical photons going off in mirror directions, and they must be identical in some regards because of conservation laws. Why should we be surprised that both of these have the same polarization when they’re measured at their ultimate destinations?

            Just “being correlated” could be trivially explained by hidden variables (and I think that’s what “must be identical” would argue for). IMO this example isn’t worth anything unless it explicitly addresses how “being identical” in the sense of having the same properties (hidden or not) isn’t enough, as shown by Bell’s theorem.

          • ec429 says:

            Did spooky action happen at a distance, or is this just a predictable correlation?

            That’s all very well, until you start making bits of the experiment depend on whether other bits made — or will later make — a measurement. I have yet to see a to-me-satisfactory Copenhagenist explanation of the delayed choice quantum eraser, an experiment which seems to me to drive collapse interpretations off into the Wigner’s-friend weeds. The only way to make it work is to keep the wavefunction around until the end of the experiment, so why declare that at that point all these correlations (and in MWI they really are just correlations in the wavefunction, inasmuch as the wavefunction doesn’t factorise in ways it would without entanglement) turn into a single randomly-selected result?

            The Universe would have to do more than just bookkeep the correlations, it would have to know whether it was still possible for any further-delayed choices to affect the result, and if so keep the result ‘provisional’ until then, going back and rewriting history to fix it. If we point some of the beams at outer space instead of the detectors, the superposition potentially has to stay around at least until the particles cross the cosmological event horizon, just in case someone with a convenient mirror reflects them back at us. AIUI we can many-worldsify an arbitrarily large experimental setup, potentially including conscious observers, by sending the idler photons on a sufficiently long journey.

          • Harry Maurice Johnston says:

            @ec429, hold on, are you claiming that applying Copenhagen gives the wrong answer in the delayed-choice quantum eraser experiment? That would be astonishing if true.

            Anyone actually done the math?

          • ec429 says:

            @Harry Maurice Johnston AIUI, it gives the right answer, but only if you already know the right answer and select the point at which collapse happens to make it give that answer. (Which can be arbitrarily late if the idler photons are sent on a long journey.)

          • Harry Maurice Johnston says:

            Yeah, that would count as giving the wrong answer. I’ll keep this in mind as something to look into, if and when I can spare any time for it.

        • viVI_IViv says:

          If it’s a representation of incomplete knowledge, doesn’t that imply there’s a real thing of which our knowledge is incomplete? That sounds like hidden variables.

          Not necessarily. If you assume that the world is fundamentally deterministic, then you need either some kind of global hidden variables or branching observers (MWI) in order to reconcile determinism with the observation of apparently irreconcilable stochasticity. If you give up determinism, then all you have are observations with certain statistics, and in this picture wavefunctions (or more generally density matrices) are representations of your knowledge of these statistics.

          In the map-territory analogy: HV theories assume there is a territory and it’s deterministic but our map systematically leaves certain details out, MWI assumes that again there is a territory and it’s deterministic but our maps are randomly generated from it, Copenhagen (and more explicitly Quantum Bayes theories) assume there are only maps. Think of a map of a fantasy world: you can study it and find patterns in it, even if there is no “real” territory it represents.

      • Bugmaster says:

        The usual argument against wavefunction collapse tends to take issue with a) the existence of observers being important to the theory

        I am not a physicist, so I never really understood this part. Doesn’t the word “observer” just mean “anything that interacts with the wavefunction”? So, the “observer” could just be a photon or something, couldn’t it?

        • jermo sapiens says:

          This is also where my understanding of QM hits a brick wall. I’ve researched it to the point where I think this issue is not solved, but I’m not aware of any research attempting to solve it. I believe that the dominant paradigm of “shut up and calculate” essentially discourages any research into that question.

          But it’s not going away, and the MWI is one of those attempts at trying to figure out what’s under the hood of QM, although it deals with what happens when the wave function collapses, not what causes the wave function to collapse.

          The current understanding is that a measurement happens when there is a “macroscopic effect” (of course this just raises the question of what counts as a macroscopic effect, especially when macroscopic objects are made of quantum elements).

          This short video does a much better job than I ever could discussing it.

      • eigenmoon says:

        even while our state of knowledge collapses, the “physical” quantum behavior can’t collapse

        If we observe an intelligent robot who observes some particles, do the robot’s measurements collapse the wavefunction even when we haven’t yet observed the robot’s brain to collapse our state of knowledge? I understood Copenhagen as a “yes” but your description seems like a “no”. But then isn’t that basically saying that the robot lives under MWI rules but we don’t? Why would that be?

        • entropy says:

          I don’t know what a Copenhagenist would believe about this. In fact, I suspect that Copenhagenists themselves don’t quite know what they believe. That’s my biased opinion, anyway. This stuff was all formulated almost 100 years ago before we even had Bell’s theorem.

          Your thought experiment is basically a rephrasing of Schrödinger’s cat (the robot being the cat). From the viewpoint of an agnostic, it doesn’t matter much if the wavefunction collapsed when the robot observed the particles or when you did. The probabilities work out the same either way. You could also put another person in the box rather than a robot; the analysis is the same. “Wigner’s friend” is the name of that thought experiment.

          MWI puts the whole universe in the box. No wavefunction collapse at all (unless God observes it from the outside?)

      • Jiro says:

        the rationalsphere belief in MWI appears to be held in opposition to a misunderstanding of single-world, one I believe Scott makes here

        The “rationalsphere” belief in MWI is based around Eliezer Yudkowsky. If Eliezer had not announced from on high that MWI is the only valid interpretation and that any physicist who doesn’t believe in it is incompetent, we would not be here arguing this.

        • ec429 says:

          Datum: I believed in MWI before I’d heard of Yudkowsky. Then again, before I’d heard of MWI I was of the opinion that QM was trivially false, because it (or rather, the Copenhagenism which was all I’d encountered) was logically and philosophically incoherent*.

          Perhaps it is the case that MWI-fans are naturally attracted to Yudkowskianism, as we share his intellectual prejudices and preconceptions, rather than that Yudkowsky fans follow him into MWI.

          * There’s a pun on ‘decoherent’ to be made here, which I almost resisted, but then this footnote happened.

      • MicaiahC says:

        So, I didn’t do quantum optics and REALLY sucked at Condensed Matter in school, so please correct me but:

        The ENTIRE appeal of Many Worlds is that it turns the Born rule into a subjective non-realist part of the theory and the wavefunction into the realist part. The reason the Born rule occurs is that you locate the subset of worlds you are in after you gain information about which branch you are in, but this also has the benefit that you don’t throw the wave equation down the stairs.

        What on earth IS evolving, if the wave function is not literally real? What is evolving forward according to the QMed versions of the energy operators, and why the heck would that couple to the system OUTSIDE of the one under observation? Saying that the wave function is non-realist seems entirely insane to me, because we see wave equations all the time and the way waves evolve in non-quantum contexts grounds our intuitions. So you’re going to have to explain why, for, say, normal ocean waves, the wave equation written down does not reflect physical interactions in the system, or specify *what* about the Schrödinger equation itself tells you that it’s a non-realist one.

        Also, since you did quantum optics and are a Copenhagenist, what is your account of what happens during the delayed-choice quantum eraser experiment? As an MWIist, it seems pretty obvious: the system evolves according to the wave equation, and each different substate of the wave function represents a possible outcome of the experiment (because the greater world entangles / interacts with each photon in a different manner in each subspace of worlds). But it seems genuinely troubling for any “objective collapse” theory, because it seems you have to throw away locality in order to claim that some sort of collapse happens. You’re a subjective collapser. What is going on?

      • Thegnskald says:

        I suspect the rationalsphere tends toward MWI because of Douglas Adams, rather than EY.

        • Jiro says:

          EY vastly oversold it. Obviously there is no way to prove where it comes from, but search for lesswrong.com and “many worlds”.

          • Thegnskald says:

            I was around in those days, and already regarded MWI as the most likely interpretation, albeit for what I now understand are erroneous reasons.

            My current understanding still suggests MWI is the most likely explanation, with matrix mechanics being a close second. Matrix mechanics permits a novel hidden variables approach: The hidden variables are probabilities. It thus neatly evades Bell Inequality problems. Bananaworld: Quantum Physics for Primates describes the mechanisms for this, if you are curious.

            But I can see the conceptions that made MWI feel more intuitive than it actually is, and they derive from Douglas Adams, not EY. I think EY was subject to the same preconceptions that make rationalists generally more likely to find MWI intuitive, rather than creating those conceptions.

            Which is to say, I think this is a case of common conceptual roots rather than origination.

          • Jaskologist says:

            He also put forth belief in Many Worlds as one of the 3 criteria by which to judge how correct/rational somebody is. So he basically taught that if Many Worlds turns out to be wrong, the whole of the Sequences are garbage.

      • sovietKaleEatYou says:

        I think this is a reasonable amount of scepticism, but note that the Born rule does follow from MWI without any additions of Satan. The way I like to think about it is this: the wavefunction assigns a complex number to each basis vector, which is (very roughly) a possible universe. Now say that Adam runs a simulation of the wavefunction on an infinitely fast computer. He stops it at some time t_1 and tells Beth one of the 1/10^100 fraction of universes with the biggest (absolute value of) coefficients. Now under realistic thermodynamic assumptions (essentially, that entropy starts out low and stabilizes much later than t_1), Beth’s best bet to guess a universe which will be one of the highest-coefficient universes at time t_2 is to run Schroedinger’s equation on Adam’s output and output the highest-coefficient “conditional” universe. Now if she also wants to give a probability that she attaches to the several highest-probability conditional universes, these are precisely given by Born’s rule (square of the amplitude). In fact any reasonable way of sampling “high-coefficient” universes will limit out in a reasonable thermodynamic context to the Born rule: this is a sort of quantum “law of large numbers”.

        In particular, if Adam runs the simulation directly to time t_2 and finds a universe with something that resembles replicating life, the most likely scenario is that there was an antecedent universe at time t_1 with high coefficient containing precursor lifeforms, which ran some primitive “program” to increase the prominence of universes with more copies of themselves. They could achieve this, perhaps, by running a crude simulation of their environment and manipulating it in order to best reproduce. This program would very likely involve a crude approximation of Born’s rule for predicting the outcomes of actions. In our universe a very good approximation of Born’s rule on large scales is Newtonian physics, so it is natural that a mostly-Newtonian model of their surroundings suffices for most living organisms. Only very elaborate creatures like ourselves need to develop any higher-order versions of it.
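
        (To pin down “square of the amplitude”: for a normalized state |\psi\rangle = \sum_n c_n |n\rangle, Born’s rule assigns outcome n the probability p_n = |c_n|^2, with \sum_n |c_n|^2 = 1. The modulus matters because the coefficients are complex.)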

        • Harry Maurice Johnston says:

          Now say that Adam runs a simulation of the wavefunction on an infinitely fast computer. He stops it at some time t_1 and tells Beth one of the 1/10^100 fraction of universes with the biggest (absolute value of) coefficients.

          I think Adam is playing the role of the Devil in this model. (But perhaps it seems more elegant if you’ve already accepted the simulation theory.)

        • smocc says:

          This looks similar to Deutsch’s argument for the Born rule, by which I’m also not convinced. How does Adam tell the coefficients of the wavefunction’s decomposition without taking an inner product in some sense? Why should the coefficients tell us about what “happens”?

          It is interesting that once you have an inner product and coefficients the Born rule kind of pops out. But in my mind the really interesting part of the Born rule is how to go from the wavefunction to “what happens.” When you start with the assumption that coefficients in a particular basis tell you about “what happens” then you are eliding the really difficult part.