[Epistemic status: Any time I make an anthropic argument, you should probably interpret it as trolling]
Sean Carroll argues that the simulation argument is false.
The simulation argument posits two kinds of universes: “high-level” universes that can simulate other universes, and “ground-level” universes that can’t. By the terms of the simulation argument itself, most universes will be at ground level, since every high-level universe can simulate many ground-level ones. So (says the argument) we should expect to be at ground level. But the simulation argument itself hinges on our observation that it looks like our universe is capable of simulating other lower-level universes. So apparently we aren’t on ground-level. So the simulation argument is probably false.
(I might be summing it up badly. Read the actual post for more.)
Suppose Carroll’s reasoning is right. What would a ground-level universe look like?
It would have to be pretty weird. It would have to ban the creation of Turing machines – since with enough time and resources any Turing machine could be expanded into a full-scale simulation. But Turing machines are pretty simple, and brains supporting conscious observers are pretty complicated. To have conscious observers but not Turing machines – well, once again, this would have to be pretty weird.
Brains would have to run off a science different from the local science accessible to in-universe researchers. Probably they would be run remotely, in the simulating universe, and then the results beamed into the simulated universe with no regard for the computational rules of the simulation. Maybe an alien dissecting a fellow alien’s head would just find a perfectly featureless crystal with no internal structure, which is observed to inexplicably send nerve impulses to the rest of the entity’s body. Such aliens might invent psychology, but never neuroscience, and even if they speculated about it, it wouldn’t matter – attempts to “simulate” neurons would fail, their workings forever beyond locally accessible physics. Even if they completely mastered their local science, their brains would remain a mystery.
I used the phrase “conscious observers” above. There are versions of anthropics that work for p-zombies, but we’re not p-zombies and we don’t have to use them; we can do anthropics conditioning upon consciousness. Try that, and the simulation argument doesn’t exactly depend on a ground-level universe where further simulations are impossible. It depends on a ground-level universe where further simulations containing conscious observers are impossible.
This changes the scenario a bit. Now people in ground-level simulations can expect arbitrarily complex physics, physics that allow the creation of as many Turing machines as they want, but which can’t possibly explain consciousness. They should be able to master every aspect of the universe around them except consciousness, which try as they might will remain refractory to their simulations. Consciousness will make perfect sense in the physics of the universe above theirs, but the simulators will have excised all consciousness-related rules from the ground-level sim. Try as the simulated scientists might, it’ll remain a mystery.
If Carroll’s deconstruction of the simulation argument is right, then the more trouble we have explaining consciousness, the more that should push us to believe we’re in a ground-level simulation. There’s probably a higher-level version of physics in which consciousness makes sense. Our own consciousness is probably being run in a world that operates on that higher-level law. And we’re stuck in a low-resolution world whose physics doesn’t allow consciousness – because if we weren’t, we’d just keep recursing further until we were.
I don’t care about Carroll’s argument, I just really really love this one.
It opens a good doorway to thinking about the question “Is consciousness just a result of things we basically understand? Do we just need to figure out what fiddly bits we’re missing and put it all together? Or is there a big gap in our understanding of how consciousness is produced?”
This question is important to everything from AI to ems. But I haven’t seen it addressed on the basis of hard data rather than intuition. Of course it can be hard to make a reasoned argument about this before you’ve solved the consciousness creation problem, but it would be neat to see more of an attempt.
The obvious alternative is, “consciousness is not a real thing, it’s just a name we give to a loosely defined collection of phenomena”. By analogy, “fruit” is not a thing, despite the fact that we find it quite useful to label things as “fruit”. No matter how closely you zoom onto an apple, you won’t find a little label that says “yes this is a fruit” on its atoms.
This is sort of my view – the hard problem of consciousness is convincing people that there’s no hard problem of consciousness.
This argument always becomes circular. I have heard a wacky (but cool) idea suggested that the only difference between p-zombies and conscious people would be an inability to understand the hard problem.
Yup, I endorse this. But actually this would make p-zombies impossible, as absence of consciousness would have an observable consequence. You would just have regular zombies.
But bugmaster’s comment read to me like a critique of that level of rigour, requiring things to be pre-labeled in order to qualify as real.
Or if the point is just about consciousness being a ragbag rather than a unitary thing… then it doesn’t help at all, because some of the components of the ragbag are still hard problems individually, so it doesn’t lead to an elimination of hard problems. That approach has effectively been tried; we have a breakdown into qualia, self-awareness, representation, higher-order thought and so on.
But by that analogy, nothing is a thing; there’s just bunches of atoms in clumps and fundamentally energy.
Which is fair enough, but not very useful for anything. “Why is a raven like a writing desk? Because there are no labels on their atoms to differentiate them!”
You might just as well throw your hat at it and say “I declare my smart phone is conscious right now” because if there’s no “thing” there that can be classed as “this is consciousness”, just emergent properties (or whatever you want to label it), then why not?
(I have half a feeling the term I am groping for is ens and half a feeling I’m completely misusing it).
As someone who does quantum mechanics for a living I find it very hard to take another view.
I find it difficult to believe QM has much to say about the application of ordinary language terms.
I can see how you would think so, I mean, it’s only the fundamental description of all matter in the universe, that’s a pretty narrow and specific theory. You could say it’s my physicist privilege showing, but think of it like this: the Ancient Greeks wondered what matter and things were made of. Physics has pretty much completely answered that question, and the answer fits on two pages. From a QM perspective, “things” are “just” complicated bound states that chemistry and its applied subdisciplines like biology [hyperbole] classify and describe.
I think what Deiseach wants to get at is the Cluster Structure of Thingspace. I think that’s a very natural idea to sometimes over-eagerly reductionist physicists, and like I said, knowing QM it’s hard to take any other view. Once you know QM, you know that, e.g., the honest answer to “what is water” is “it’s a quantum mechanical bound state involving one oxygen and two hydrogen nuclei [themselves bound states of quarks and gluons], 18 electrons, and the electromagnetic field”.
Anyway, it feels good to be on the bottom of the reductionism chain.
Says a guy who calls himself “I”.
“bottom of the reductionism chain”, eh? Then answer me this:
What the heck is an electron??
From a QM perspective, “things” are “just” complicated bound states
And that has nothing to do with the applicability of ordinary language terms. I am not saying that QM is incorrect, or incorrect about what is fundamental. You are appealing to a rule along the lines of “thou shalt not use a term unless its referent is one of the fundamental constituents of the universe”. That is not a rule of QM, nor is it a rule of English.
Heck, you could even make a quantum operator to count the number of chairs, or fruit, or whatever. It would be a stupidly complicated operator and would depend on what exactly you mean by what it is you’re counting, but there’s plenty of room in the theory to accommodate complex objects.
So, don’t blame QM for the idea that there’s only bunches of atoms in clumps.
Can someone please define consciousness for the purposes of this discussion? I feel lost.
I think the beauty of this is that the question of whether the subjective experience of consciousness is any type of a “thing” is entirely irrelevant to the simulations puzzle. It is the outwardly observable behavior that we are interested in here: specifically, the behavior of spontaneously starting to talk about simulations and taking self-directed problem-solving steps to create simulations, especially ones that include self-starting creative agents that can loop through the process again one level down.
In fact, a good working test of whether you’ve come up with a conscious agent is this: leaving the desire to create simulations out of the source code is not a viable solution to the issue of the agent creating simulations, because the agent will spontaneously come up with this type of goal for itself.
Fruit is a thing, and your argument is irrelevant, because “being fruit” and having “a little label that says ‘yes this is a fruit'” are not remotely close to being the same.
By which criterion of “real thing”, nothing is real except the universe as a whole. OK, now we need new word(s) to cover the lost territory that used to fly the flag of “reality”.
Edit: Deiseach and AncientGeek and entirelyuseless kinda beat me to it. I’ll leave my response in; it’s complementary.
Think intuitively about consciousness.
I say combine the statements “I think therefore I am” and “I feel therefore I know I exist…in a different way that’s hard to explain”
I think both are prerequisites for consciousness as we understand it.
Most thought too far beyond that starts seeming terrible.
The simulation argument seems basically premised on the idea that the singularity will happen and we’ll have ~infinite computing power. We certainly can’t simulate anything like our universe, or even a one-cubic-centimeter subset of our universe, with current or anytime-near-future technology, and if simulations are impractical then the argument loses its force. Thus, being a singularity skeptic, I’m inclined to think the whole simulation question is on a foundation of sand.
That said, I’m somewhat amused at how this hypothesis is basically just dualist theism.
I don’t think it requires the singularity. Just technology that keeps on improving arbitrarily far into the future. If Moore’s Law or something like it keeps up, “arbitrarily far” might not be more than a century or two.
Let’s suppose we have an alien society which has experienced exponential growth of computing power per unit area of chip for some time, and that this alien society decides to simulate a world wherein the inhabitants of planet “Earth” develop their own computing technology, which also experiences exponential growth of computing power. At a certain point, isn’t somebody going to run out of the energy required to run so many of these chip-based Turing machines simulating the computations of so many exponentially improving Turing machines? Isn’t a fuse going to blow somewhere?
I’m not even convinced that there’s anything special about Turing machines. Is there any evidence that a general model of reality can ever run effectively at all scales without just being a snowglobe that “models” the contents of its globe?
The argument seems like a classic case of the motte and bailey to me. “Of course there will be a ‘singularity'” [Singularity, here, meaning that computers become smarter than humans], “so of course computers will improve themselves to the point where they’re capable of completely simulating humans” [Singularity, here, meaning vague super-powers of computation].
I’d expect reasonable corollaries of that argument to be: there are, proportionally, close to no ants. The vast majority of ants are just simulations of ants running on university clusters. There are, proportionally, no animals of any kind but us: all animals are just super-computer simulations.
Am I missing something? This all seems silly.
Keep in mind that the simulated universes don’t have to be running in realtime. With access to unlimited memory storage, an arbitrarily slow computer which consumes arbitrarily low amounts of energy could simulate an arbitrarily large universe. The beings in the simulated universe would have no idea that one second of their existence took much, much longer to be simulated on the outside.
(disclaimer: I find simulationist arguments fun and thought-provoking, but without any real evidence to back them up they’re nothing more than that. I’d be curious to know how many people actually take them more seriously.)
The simulation consumes an arbitrarily low amount of power, but not an arbitrarily low amount of energy. Every simulated second you have to change the state of your simulation, and there is some minimum amount of energy you have to expend for every change of state, for every pair of initial and destination states. The minimum can’t change just because you move slowly.
EDIT: I guess “expend energy” should be more like “increase entropy”, but that limitation is contingent on non-decreasing entropy being a fact of the simulating universe.
Wait, why are you assuming unlimited memory storage? That’s a very real problem for engineering a universe simulator.
As I understand it, reversible computing shows that the minimum energy expenditure does decrease the slower you go; and Freeman Dyson has made a famous argument that as the universe cools, under certain conditions you can do an arbitrary number of computations with a fixed amount of energy (https://en.wikipedia.org/wiki/Dyson's_eternal_intelligence) which shows that it’s at least possible to imagine this sort of scenario.
It seems to me that living forever in a reversible computer requires an infinite amount of memory, since any reversible system will have its own version of the second law, and it’s pretty easy to argue that infinite memory is a no-go with physics/cosmology as we know it.
There’s no reason you would necessarily need infinite memory if you were simulating a fixed size universe with reversible physics on a reversible computer. If you have some function f that represents moving your universe’s state forward one “tick”, you can definitely get by with enough memory for two copies of your universe and whatever memory f needs internally to do the evolution. “Whatever memory f needs internally to do the evolution” may be unbounded depending on your choice of function, but there are plenty of functions with bounded memory usage to choose from.
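A toy sketch of what that bounded-memory setup could look like: a second-order (“Fredkin-style”) reversible cellular automaton standing in for the tick function f. The rule and the eight-cell “universe” here are my own illustration, not anything from the thread – the point is just that stepping forward and backward works with a fixed two-copy memory footprint:

```python
# Second-order reversible cellular automaton: the next state depends on both
# the current and previous states, and the XOR makes every step invertible.
# Memory use is fixed at two copies of the universe, however many ticks run.

def tick(prev, curr):
    """Advance one step: next[i] = rule(curr neighborhood) XOR prev[i]."""
    n = len(curr)
    # Toy local rule: XOR of each cell with its two neighbors (wrapping).
    rule = [curr[(i - 1) % n] ^ curr[i] ^ curr[(i + 1) % n] for i in range(n)]
    nxt = [r ^ p for r, p in zip(rule, prev)]
    return curr, nxt  # the new (prev, curr) pair

def untick(prev, curr):
    """Run the same step backwards -- possible because XOR is its own inverse."""
    n = len(curr)
    rule = [prev[(i - 1) % n] ^ prev[i] ^ prev[(i + 1) % n] for i in range(n)]
    old_prev = [r ^ c for r, c in zip(rule, curr)]
    return old_prev, prev

start = ([0] * 8, [0, 1, 1, 0, 1, 0, 0, 1])
state = start
for _ in range(1000):
    state = tick(*state)
for _ in range(1000):
    state = untick(*state)
assert state == start  # 1000 ticks forward, 1000 back: nothing lost
```

Whether a real physics could be captured by such a bounded-state update function is exactly the open question; this only illustrates that reversibility by itself doesn’t force unbounded memory.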
I don’t see any use for slow simulations other than proofs of concept: see, that simulated worm.
Besides, if you assume a cascading multiverse of slow simulations, each added level greatly increases the probability that the master universe runs into its own heat death first. In sum, if a universe out there is using slow simulations, we cannot assume there are a lot of them, and thus a lot of people living in them.
Hell, nevermind the heat death; if a universe is a slow universe, it is probably a proof of concept, and thus it will be shut down as soon as it proves its concept. And that point would be as soon as an underlying civilization runs its own simulation. So on slow simulations, you probably only ever hit a depth of 2 simulations max.
Simulation arguments are at least interesting as exercises to show, for example, that the universe could have been “created” without the need to resort to God existing, or being omniscient/omnipotent/blabla, if there were a God. I like them as arguments for discussions with theists/believers, just as ways of poking holes in some of their arguments (i.e. there could be a supernatural world/dimension/level, but this absolutely does NOT imply the existence of God). Etc…
Unless, of course, somebody is lucky enough to live in a universe where the Second Law of Thermodynamics isn’t a thing, but this is just getting silly.
Clearly, we should just simulate a universe without a Second Law of Thermodynamics, and then ask the people in that universe to simulate our own universe but without the Second Law of Thermodynamics. While we’re at it, we can ask them to get rid of Piers Morgan.
If Liverpool win the League this year, I’ll know we’re living in a simulation 🙂
(We’re selling or otherwise getting shot of everybody, even Kloppo doesn’t know who if anyone he’s buying during the transfer window, and we managed to beat Barcelona 4-0 then lose 0-2 to Burnley. Whoever is programming this simulation is plainly having a laugh).
No, you’re not missing something here, you’re hitting on a key point of Sean Carroll’s argument: finite resources. Scott’s remark
misses the boat. What Sean Carroll says is
Dyson’s hypothesis notwithstanding, that seems like a good assumption to me.
Moore’s law concerns silicon component density. It can’t sustain even another decade of the original tempo (~6 doublings) without components running into the atomic scale, at which point current understandings of physics prohibit its continuation.
Other metrics which have sustained a Moore-like trajectory, such as single-threaded clock speed, have already flattened out.
The difference between current technology and that which would allow us to run a cubic-centimeter-scale universe sim is counted in the dozens of orders of magnitude. Not to mention that we would first have to actually figure out the grand unified theory, on which progress seems decidedly stalled.
A civilization that could successfully run universe sims would be so qualitatively different from our own that I don’t think it’s a stretch to refer to it as post-singularity.
Price-performance (FLOPS/dollar) continues to improve, though. This is more relevant.
“Dozens of orders of magnitude” may be doable as we scale up to higher Kardashev levels. To mangle a quote, there’s plenty of room at the top.
With classical chips, we have already pretty much hit the wall, and Moore’s law in its original form is done for. We can’t go smaller without running into quantum effects (currently, the wires on a chip are 50-60 atoms wide), and we can’t go faster (or wider) because the signals have to get around the chip within one clock cycle (which, at 3 GHz gives us at most 10 cm to work with).
The ways out would be to go parallel (which we are doing already, but not every problem can be parallelized easily), to go fully three-dimensional (which would be prohibitively expensive, and a nightmare to design cooling for), or to find some clever design for gates and wires that actively uses quantum effects instead of working around them. But that is far into the future, and at the moment, we’re kinda stuck. Getting cheaper is the only direction we are still improving on at the original speed.
Unless your computer is a series of quarks described by quantum mechanics…so our universe is kind of running a simulator that has QM as the bus (as in CPU-BUS). But it has been a long time since I was 18 and could theorize about such things.
I guess the question is, what are you trying to simulate ? If you are trying to simulate a very simple universe, then we don’t need to wait for the future, we can already run Conway’s Game of Life on our phones. On the other hand, if you want to accurately simulate the entire Earth, you need a computer the size of the Earth. This puts an upper limit on what you can realistically expect to accomplish.
You don’t need a computer the size of the Earth to simulate the Earth, that much we know – the (local) Universe is very compressible. That is part of the argument for why it might be simulable. It needn’t have been compressible.
Another interesting property of the Universe to figure out is whether it is holographic. If it is, its simulability becomes much more interesting.
What are you alluding to when you say the universe is compressible? That sounds like a big claim. After all, given some compression scheme, how do we decide that it has captured the essence of the universe? What is and isn’t essential seems like it would be an ill-defined thing.
Compressibility sounds like a strong claim, but thermodynamics seems to allow exactly that. If you don’t care about what happens to individual atoms at the quantum level, you can most of the time just replace it with the average and be happy. Your simulation just needs to be smart enough to notice when individual atoms do matter.
Does modern micro- and nanotechnology screw with the computers of our matrix overlords? I sure hope they don’t react to the sudden lag the way I react to a catsplosion in Dwarf Fortress 🙂
When I say the Universe is compressible I mean it is “mostly empty”, or equivalently, “low entropy”, or equivalently, the number of bits required to describe completely some 4-volume is generally much less than the maximum number of bits that might be needed to describe such 4-volume, for most 4-volumes within the Universe.
This is trivially true, and need not have been true, so it is a somewhat surprising property of the Universe.
A static representation of the universe is highly compressible; I’m not so sure about compressing the dynamics during a simulation. All that empty space is filled with fields and wave functions and virtual particles that seem to actually matter to the end result.
If I’m in charge of simplifying physics for efficient simulation, I want to know up front that all those empty grid cells are really empty. Not, “…and after calculating the wave functions of every particle that could possibly have passed through cell (X,Y,Z,t), summing and collapsing the ensemble, we retroactively determine that it was empty after all. This time.”
And if we have to have quantum mechanics work like that, I think we can increase the Planck scale five or ten orders of magnitude and tweak some of the other parameters so that physical chemistry will still work well enough for Life As We Know It. I’ll defer to Roger Penrose as to whether consciousness would still be possible if the Planck length were 1E-30 meters :-)
Possibly thread drifting, but something like this seems to be the unstated operating principle of Vernor Vinge’s “Zones of thought” universe. The more matter in a given region, the more “compressed” the simulation becomes, so the closer you get to the galactic core, the more technology stops working.
Suppose your bag contains 10 oranges. This is one way the simulation could represent that datum:
orange, orange, orange, orange, orange, orange, orange, orange, orange, orange
This is another:
10× orange
The second is a lossless compressed representation. It’s basically what “zip” and similar programs do.
Now consider that everything can be represented as numbers (e.g. the position and speed and other physics data of every particle). You can just string numbers together to represent the full state of the universe; suppose our representation is like:
position 7, 7, 7: quark: charge 1, spin 2, mass 5…
Then we can convert that into a single number, like:
And this single number can be represented in binary:
Now we define some data format that allows us to say things like:
3× (5×0, 3×1)…
That is, convert the entire universe into a very very very long string of 0s and 1s, and then replace sequences of 0s and 1s with code that says “three zillion 0s”, “one hundred 1s”, etc. This is a very basic approach to compressing the universe.
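The scheme described above is run-length encoding. A minimal sketch (the encoding format is my own illustration, not anything specific from the comment):

```python
from itertools import groupby

def rle_encode(bits):
    """Collapse runs of identical symbols: '000111' -> [(3, '0'), (3, '1')]."""
    return [(len(list(run)), ch) for ch, run in groupby(bits)]

def rle_decode(runs):
    """Expand the runs back into the original string -- lossless."""
    return "".join(ch * count for count, ch in runs)

# A mostly-empty "universe": long stretches of 0s around a little structure.
universe = "0" * 50 + "111" + "0" * 50
packed = rle_encode(universe)
print(packed)  # [(50, '0'), (3, '1'), (50, '0')] -- 3 runs instead of 103 bits
assert rle_decode(packed) == universe
```

The catch, as the thread goes on to discuss, is that this only wins when the string really does have long uniform runs; a maximum-entropy universe wouldn’t compress at all.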
But… That’s wrong. Exponential growth for a finite period of time yields a finite result. Not an arbitrarily high result. You can’t just handwave this.
Arbitrarily far into the future might not happen: the universe is expanding faster and faster, and thus our technology might not reach enough matter/energy to do much simulation.
Unless the cosmological constant itself increases, there is no reason to think expansion will pull apart already-gravitationally-bound structures. It could limit our arbitrarily-far-future descendants to one supercluster, though.
Yeah, it does in places uncomfortably sound like “mysterious beamed-in consciousness” or “inexplicable and irreproducible consciousness = soul”, which I’m not sure at all anyone wants to use as an argument for religion (I really, really hope some bright spark apologist doesn’t get wind of it and try to use a mangled version of it as a ‘proof’ for God).
Why not? It seems at first glance a little like “god-of-the-gaps,” but (as a Christian) it seems to me like it could be developed into a fairer argument than many…
I (an ex-Catholic) used to use this exact reasoning until I realized that “God!” was functioning as Mysterious Answer here. It doesn’t help you make any predictions, and we don’t have any outside evidence that suggests that “inexplicable and irreproducible consciousness” implies the existence of a god.
Sure, no “god-of-the-gaps” arguments let you make positive predictions, but more detailed theories along this line (e.g. Penrose, whose theories I’ve glanced at but little more) could let you make a few predictions.
And no, this doesn’t necessarily imply God’s existence, but it implies the existence of something beyond the material world.
Penrose’s theories of consciousness don’t involve any kind of gaps; he was just arguing that the brain used quantum mechanics (in a very specific and knowable way) to perform computations that are beyond what is possible with a Turing machine. His theories made a number of (now falsified) biological predictions, but there was never any magic. In fact, his objective reduction theories were created for the purpose of removing randomness from the equation (which is most inescapable with existing quantum theory), as well as to allow the computation of non-Turing computable functions (also not allowed in standard quantum mechanics).
For a moment I read that as “god-of-the-apps.”
Sorry, carry on.
I dislike “god of the gaps” style reasoning because God is not sneaking in through the cracks we graciously leave Him in our grand theories of everything, He is the ground of being. That’s why the Thomists are jumping all over the Intelligent Design guys with “This is a lousy argument”, as far as I understand the state of play 🙂
Also, theologians doing science always make the most wince-inducing hash of things. They pretty much reliably don’t really understand the science, they have their own versions of theology that can be out to lunch – you might think it’s the conservative “Bible says it’s so!” types who do this, but liberal theologians are just as bad – I’m particularly thinking of John Shelby Spong here, but not to be kicking the Episcopalians alone, for (ex-)Catholics look at Matthew Fox. Yeah, moving right along…
And, as the joke has it, Christian culture is always about fifteen years behind popular culture. So these kinds of “arguments from science” generally are as behind the times, so it’s quite easy for the scientists to point out that actually, the field has moved on since then and they’ve got an answer to that and the new big interesting question is quite different.
Basically, it’s “cobbler, stick to your last”. Someone with shaky theology and a misunderstanding of the topic throwing together an argument with a shiny veneer but no depth is not going to do anyone on either side any good. The non-religious will simply say “This just proves the religious have no idea what they’re talking about, and indeed this undermines their argument since we do have an explanation for the phenomenon” and the religious will fall into various pits (e.g. “faith isn’t based on reason” or the like – remember, kids, fideism is a heresy!).
St Augustine got there before me in A.D. 415 on “The Literal Meaning of Genesis” (bolding mine):
This is true, and it really goes for apologists of any stripe. Amateurs defending e.g. established science or some philosophy with bad arguments can do the same damage when they meet someone not sympathetic to their position. It’s all too easy to slide unconsciously from “Your arguments for [Christianity, evolution, consequentialism] are terrible” to “Arguments for [Christianity, evolution, consequentialism] are terrible”.
These comments are why I missed you, Deiseach.
Like many arguments for God, it would only get you as far as deism anyway.
>(I really, really hope some bright spark apologist doesn’t get wind of it and try to use a mangled version of it as a ‘proof’ for God).
Reveal unto me the horror, the horror, O Other Brad!
Whodunnit and where and when?
I did it.
In the billiards room. With the candlestick.
Seriously, though, this line of reasoning has come up in discussions with friends of mine before I even read this post.
It’s best not to think of computing power as this general thing, but as stringing algorithms together to get a result.
I am really interested in what we can do with the proper blend of quantum/photonic/massive-parallelism computing. Multiple classes of problems, when done with massive parallelism, go from O(n log n) to O(log n), which is effectively constant for most real-world problem sizes.
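The O(log n) figure comes from combining elements pairwise in parallel rounds; each round halves the problem, so n inputs finish in about log₂ n rounds. A sequential sketch of that schedule (it counts rounds rather than actually running in parallel, and the pairwise-sum layout is my own illustration):

```python
import math

def parallel_sum_rounds(values):
    """Simulate a parallel pairwise reduction. Each while-iteration is one
    round in which every pair could be combined simultaneously on separate
    processors; we return the result and the number of rounds needed."""
    rounds = 0
    while len(values) > 1:
        # Pair up adjacent elements; an odd element out just rides along.
        values = [sum(values[i:i + 2]) for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, rounds = parallel_sum_rounds(list(range(1024)))
assert total == sum(range(1024))
assert rounds == math.ceil(math.log2(1024))  # 10 rounds for 1024 inputs
```

With enough processors, each round takes constant wall-clock time, which is where the jump from O(n log n) total work to O(log n) elapsed time comes from.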
The assumption behind the assumption is that there is a broad similarity between embedding and embedded universes. A simulationist could argue that we have been placed in a universe where simulation is unrealistically difficult to keep us fooled… a variation on “who taught you thermodynamics”. But it isn’t much of an argument, because it assumes simulation.
Yeah, that was bugging me the whole time.
If the physics of an embedding universe can’t be simulated on a Turing machine, then Turing machines can be part of the simulation without any infinite regress. Same with the whole tangent upthread about Moore’s Law and the energy demands of a simulation–if you’re in an simulation then you have no idea whether “energy” is even a thing outside the simulation.
If the universe upstairs is radically dissimilar to ours, what is the point of talking about a “simulation” in that universe? Or trying to apply any of our words and concepts? Such thoughts get you nowhere very fast.
I love SSC! It doesn’t matter how fast I try to think about an issue, someone has already thought it (which is also frustrating as hell, btw).
Indeed, I was already at the point of equating the simulation argument with dualist theism (and being a dualist theist -oh, the horror! I know- having to accept an argument I’ve always found silly), but then infinite regression saved me: the high-level universe where the simulation is run can in turn be a simulation run by a higher level one, with its own subset of not-entirely-explainable by physics areas of reality that are indeed “provided from above”, and on and on and on it goes, so for parsimony’s sake it is best to go back to the unity of the mind designing the whole shebang.
Again, I humbly recognize such view would be, to say the least, not very popular here.
Wouldn’t it be impossible to simulate something the size of our universe within our universe? We’d need as much information as is currently in the universe to create the simulation, and then more than that to actually run it. It seems like any simulation of a universe would have to be smaller than the universe simulating it, and that would prevent any sort of infinite regress.
Just because our universe tentatively appears finite[way more footnotes than worth expounding] doesn’t mean that a hypothetical universe must be.
And when in the business of imagining a simulated universe much different than our own, there is no reason why computation couldn’t be offloaded with a kernel call. “We’ve discovered a new quantum particle that, if grouped together with other particles of its kind such that their quantum states encode a boolean equation, when struck with a photon they decay (in constant time) such that the emitted particles encode a satisfying assignment! Not only is P=NP, but P=NP=O(1)!!”.
It would be possible to imagine offloading storage to the kernel, too. Say, a particle that stores and re-emits information; we’ll leave that as an exercise for the reader.
Hell, when in the business of imagining non-simulated universes much different than ours there is no reason there couldn’t be instant computation or infinite information density, or whatever.
If we’re going to be making up something outside the universe that runs it and has entirely different physical laws, we might as well just admit we’re talking about God.
(Caveat: I have not read Bostrom’s writings on the simulation argument in detail. If my question is addressed therein, a citation or link or something will suffice. Thank you.)
So… ok, a high-level universe can simulate one or more (possibly many) “ground-level” universes. But can it not also simulate other high-level universes?
And if it can, would not most universes be high-level ones?
And if that is so; and if you accept anthropic reasoning; and if you believe that there is, a priori, as great a chance of finding yourself (a conscious observer) in any universe as any other; should you not then conclude that you are likely in a high-level universe?
And if you conclude this, are you not then unable to reach any conclusion about whether or not you’re in a simulation?
I think the argument is if every high-level universe simulates many more universes, then you end up with mostly ground-level.
Either the one high-level universe simulates a thousand ground-level universes.
Or the one high-level universe simulates a thousand other high-level universes, each of which simulates a thousand ground-level universes, and now you have 1001 high-level but 1000000 ground-level.
Or the one high-level universe simulates a thousand other high-level universes, each of which simulates a thousand more high-level universes, each of which simulates a thousand ground-level universes, and now you have 1001001 high-level but 1000000000 ground-level.
The only way you could have more high-level is if there were an absolutely ridiculous number of steps.
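The counting in the comment above can be sanity-checked with a short sketch (a toy model using the same hypothetical branching factor of 1000):

```python
# Count universes in a uniform simulation tree where every high-level
# universe simulates `branching` children, down to `depth` levels.
# (The branching factor 1000 is the thread's hypothetical number.)

def count_universes(branching, depth):
    """Return (high_level, ground_level) counts for a tree of the given
    depth; depth 1 means one root simulating only ground-level leaves."""
    high = sum(branching ** d for d in range(depth))   # internal nodes
    ground = branching ** depth                        # leaves
    return high, ground

print(count_universes(1000, 1))  # (1, 1000)
print(count_universes(1000, 2))  # (1001, 1000000)
print(count_universes(1000, 3))  # (1001001, 1000000000)
```

The leaves dominate at every depth, which is the point being made: high-level universes never catch up without an absurd number of levels.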
Mmm, you need several more assumptions for that to work. For one thing, where is this “thousand” number coming from? How many universes of each kind do we think any given universe can simulate? Does this number vary depending on how many levels of simulation deep the simulating universe is? Why do we think the next-to-last-level universes can simulate as many ground-level universes as the previous level universes can simulate high-level universes? Why would anyone simulate a ground-level universe when they can simulate a high-level universe instead? Or can they only simulate ground-level universes? That would be weird; what cause have we to suppose this is a thing that happens? (Maybe you can simulate either sort of universe, or nothing?) Or what if the tree of universes has most branches having 1000 children (with leaves being ground-level universes), and with maximum depth across them being like a thousand or a million or something, but then there’s one branch which has depth of 3^^^3 (which easily dwarfs the total node count of the rest of the tree)? Or what if there’s a lot of high-level universes that could simulate other universes (of whatever sort), but they just don’t (i.e. their child count is 0)? Etc., etc., etc. …
Well, we know we aren’t in a ground-level universe, because it seems like simulations should be possible (although it’s not like we can really test it at our current tech level).
So the list of possible universes really looks like this:
1. 1 High-level universe
2. 1000 Not-so-high-level universes (able to simulate ground-level but not high-level)
3. 1,000,000 ground-level universes (ruled out by observation).
It seems that if you can simulate even one layer of high-level universes before you reach ground level, there are good odds that we’re in a high-level simulation.
But mightn’t it still seem like simulations are possible in a ground-level universe?
Maybe, especially if the reason ground-level universes can’t simulate is their finitude. It might seem that simulations are possible, but before anyone gets a chance, the universe just ends (because the embedding universe ran out of memory, or whatever).
Edit: Daniel Kokotajlo beat me to it.
I think the question is “why should that tree be finite?”
In fact, Turing-universality sort of implies it shouldn’t be finite: a single Turing machine can simulate others, and those can simulate others, and all you incur is slowdown; you don’t keep losing “fidelity” like in this SMBC comic.
This seems to me like the definitive argument. We may not have any experience simulating universes, but we can simulate plenty of other things. We use processors to simulate other processors, and a processor can simulate itself down to whatever minute detail you require. In practice, a full simulation isn’t even needed except for research into microarchitecture. All the software running on the processor requires is the instruction set, and emulation is sufficient. Nothing in the universe needs to care how instructions get computed beneath the hood. The only thing keeping us from infinitely regressing VMs that can all do exactly the same thing, only slower, is that we eventually run out of space.
Regardless of whether you think a true Turing machine can be conscious (and I have my doubts, because it is not parallel), Bostrom’s argument is implicitly time-sensitive, because the point of simulations within simulations is to establish that the overall amount of time experienced by all consciousnesses should be dominated by simulations.
Establishing that a Turing machine can run an infinite number of simulations within simulations (given an infinite amount of storage and time) does nothing to help Bostrom, as that Turing machine will only ever be executing one consciousness at any given time.
If you doubt that time matters, note that Bostrom himself finds it important to emphasize the purported speed at which a simulation could run.
Sure, but eventually the slowdown becomes so great that you can only simulate a universe for one second or so, and why bother doing that? And even if someone would bother, that’s not going to be enough time in-simulation for the simulation to start simulating things. Something has to give.
A Turing machine is an abstraction; they don’t exist in reality. What you actually have are Turing-equivalent machines, which come with a set of limitations. These limitations are very relevant insofar as we are talking about infinities and simulating whole universes.
Why 1000? Why not 1? Or 0?
Are these simulations happening randomly, or run by an intelligent entity that chose to run them? If the first, what determines which simulations will be run? Will the universe just happen to do 1000 simulations? Why assume so?
If the second, why assume that those intelligent beings want to run 1000 simulations? Humans run simulations to learn about our world, but perhaps the aliens have a different motivation. Why assume that they are somewhat like us, when we are a rather arbitrarily evolved species? Perhaps they care not one bit about ‘intelligent’ simulations and just do the same kinds of simulations that we commonly do today (abstracted models without self-aware modeled entities). Why assume that higher-level universes are even conducive to life? Perhaps the laws of nature that make them high-level also make it impossible for them to sustain life.
Finally, one cannot just use a priori chances to judge post facto whether we are in a state. For example, the chance to win the lottery is small. Yet if I lose my memory and have to determine whether I won the lottery in the past, the chance of me being a winner is very, very low if I wake up in a small flat and much, much higher if I wake up in a big villa.
Similarly, we know that we can’t be in the lowest level universe, since we can simulate basic universes, yet we don’t even have proof that higher level universes than ours are possible. So perhaps we are at the highest level. Until we find evidence that higher level universes can even exist, it’s pure speculation.
I guess my problem with this picture of a big tree of universes simulating universes is that each sub-universe would be at least 1000 times “weaker” than its parent universe.
Maybe we could make a simulation that supports conscious observers more efficiently than our own by not actually rendering each atom and stuff, but that’s a one-time optimization that would then not be available to the simulators in the simulated universe.
Say we’re at level n: it then seems that the total number of conscious observers in level n+1 would probably be larger than the total number of observers in level n+2, even if there were more universes at level n+2 than at n+1.
Actually, if each high level universe simulates 2 or more lower-level universes you would always have more lower level ones. When each universe simulates N lower ones, the ratio of ground level ‘verses to everyone else will approach N-1:1, so if each universe simulates 1000 others, even an infinite number of steps will not get you below 99.9% of universes being ground-level.
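A quick sketch checking the geometric-series claim above (the branching factor n = 1000 is the same hypothetical number used throughout the thread):

```python
# For a full tree where each high-level universe simulates n others,
# the fraction of ground-level (leaf) universes approaches (n-1)/n,
# i.e. the ratio of leaves to everything else approaches n-1 : 1.

def ground_fraction(n, depth):
    leaves = n ** depth
    internal = (n ** depth - 1) // (n - 1)  # 1 + n + ... + n^(depth-1)
    return leaves / (leaves + internal)

print(ground_fraction(1000, 5))  # ~0.999, and depth barely matters
print(ground_fraction(2, 20))    # ~0.5: with n = 2 about half are leaves
```

With n = 1000 the ground-level fraction is already about 99.9% at any depth, matching the comment; with n = 2 you still always have (slightly) more leaves than internal nodes.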
I think there needs to be an added notion of efficiency in this discussion.
Maybe we can simulate a high-level universe, but for us to simulate one femtosecond of the most basic process in the high-level universe requires a galaxy of mass-energy even using the most perfect computer that can be built in our universe.
In contrast, maybe the physics of the high-level universe is so mathematically “dense” in some sense that they can simulate the Milky Way using something the effort/energy/whatever that would be there equivalent of turning on an electric toothbrush.
Or maybe they have no limits. Maybe there is no “energy” in the high-level universe, just consciousness and will which instantaneously enacts goals.
A good counter-example to “it would have to ban the creation of Turing machines” is redstone in Minecraft. Redstone has effects flexible enough to make CPUs out of. But simulating sub-minecraft on that redstone CPU would take such absurdly large amounts of time that there’d be no point. Redstone doesn’t ban Turing machines, yet doesn’t allow for tractable simulations.
Also, if you were doing simulations to learn about human nature or whatever, you would probably want to avoid side-tracking the whole thing with unanswerable philosophical puzzles that don’t exist on your level but show up within the simulation.
Never underestimate the power of nerds with too much free time
I was aware of projects like that. But calling the simulated thing “minecraft” is a bit like calling pong “tennis”. There’s a rather large drop in quality.
If you call Minecraft a simulation of our world, then I would definitely call this map a simulation of Minecraft within Minecraft.
In the spirit of this post, here’s Conway’s game of life simulating a bounded version of itself.
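For anyone who wants to see the rules that the linked pattern self-hosts, here is a minimal (standard) Game of Life step over a set of live cells:

```python
# Minimal Conway's Game of Life step on a set of live cell coordinates.
from itertools import product

def step(live):
    """One generation: a live cell survives with 2-3 live neighbors,
    and a dead cell with exactly 3 live neighbors is born."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))                   # the vertical phase
print(step(step(blinker)) == blinker)  # True
```

That these few lines are enough to get Turing-completeness (and hence Life-in-Life) is rather the point of the whole subthread.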
The takeaway here might be that the length of a chain of universes would probably be limited by space and time rather than arbitrary rules, and would end when there wasn’t enough stuff to simulate lower levels, but I’m not Scott Aaronson so I don’t really know.
I wrote that post, and I was also impressed by the fact that much simpler “universes” (automata that are not Turing complete) can also simulate themselves. See my question on CSTheory stackexchange
Then again, I think discussions like this are empty exercises in circular reasoning.
I think you’re kind of jumping to conclusions when you say it’s without the *ability*. Remember Bostrom’s original trilemma: the first intelligent species goes extinct, rarely ever chooses to run ancestor simulations, or we are probably an ancestor simulation. The tree of simulations could bottom out quite quickly without deeply invasive changes to the laws of physics or conscious agents simply by choices. An ancestor simulation is extremely expensive. Note that humanity has yet to run an ancestor simulation because it would be impossibly expensive to program and execute. The ancestor simulations could simply run through human history up to when it ascends into posthumanity & gains the ability to run ancestor simulations, and then stop; in that case, the trilemma remains true yet the simulated reality runs on exactly the same rules as the base level and there are no weird effects on consciousness necessary. (Indeed, such weird effects would undermine any scientific motivation for running ancestor simulations, and currently science is the main motivation for running extremely realistic large-scale simulations.) Alternately, each simulated universe could be much smaller than the simulator because the simulation creators only want to spend a fraction of their universe’s mass-energy budget on simulations; this would lead to a lot of very small 1 solar system universes, sure, but for the same reason, most conscious agents would find themselves in large universes – universes like ours, perhaps.
This. Sean Carroll raises an interesting point but it doesn’t undermine anything Bostrom said–it’s interesting in that it’s a confusion which, when dissolved, leads to better understanding of the situation.
In my words: Carroll went wrong by thinking that we should expect that we don’t have the *ability* to create simulations, in the sense of not having the right sort of physics, whereas really we should expect that we don’t have the *time* to create simulations because we will be shut off before we can make more than a few. Which is something Bostrom himself said I’m pretty sure. This latter expectation is precisely what we do in fact see, or at least, it’s perfectly consistent with our evidence so far.
A simulation can’t be at the fidelity of the original. You have to give up something.
This is what strikes me as so weird about Bostrom’s idea. Imagine us running a simulation of every single person who ever lived up to now compressed into, say, a year’s time. Let’s even posit that we can use quantum effects to somehow simulate every single atom. I don’t know how we are going to cram all of the matter required in a small enough space to avoid speed of light information transfer problems, but hey let’s ignore that.
That simulated universe won’t be able to harness quantum effects to do its own simulations. And quantum effects shouldn’t actually be apparent.
So not only is that “universe” different than ours in some fundamental ways, but any simulation it runs will need to do without utilizing QM. So it will need to simulate molecules using (simulated) atoms. Etc.
The idea that we should think that the sheer numbers of simulations dwarfs our expectation of being in the real universe falls apart.
I’m willing to accept the possibility that I’m in a simulation, but I’m not willing to accept that it’s probable, and definitely not to a near certainty.
It doesn’t need to be. It just needs to be good enough to fool the people inside. Have you read Permutation City? It has a pretty good walkthrough of how this sort of thing would work. You wouldn’t simulate individual molecules, except when necessary for some reason–for example, if a microscope was looking at them. (How would you know when to increase the fidelity of your simulation? How would you make sure that you don’t do things systematically differently, so people can detect that the molecules aren’t being simulated? Easy: AI & Science. You have plenty of time to test and retest and make your methods perfect.)
But if you want to do things like using quantum computing for another simulation inside the simulation, QM has to actually work inside the simulation.
Not only does it have to work, but the entirety of your simulation will slow down to the speed of that QM simulation, because you need everything to be in sync.
Carroll’s argument can be partially salvaged by making the “level” distinction continuous instead of binary: a given universe (real or simulated) has a certain amount of computing capacity. A simulated universe would necessarily have less computing capacity than its parent universe, since only a portion of the parent universe’s resources would be devoted to simulating each child universe, and since the simulation would necessarily have some amount of overhead.
Our universe certainly appears to have an enormous amount of computing capacity, but the key word there is “appears”. Perhaps only the Earth (or perhaps the entire solar system) is simulated in full detail, with the rest of the universe simulated only well enough to seem convincing given our ability to observe it. The size of the “bubble” to which the scope of technological civilization is to be confined over the long term serves as an indication of how much resources are available to simulate our universe convincingly in perpetuity.
This is all highly relative, though. We’ve already figured out that we can theoretically squeeze lots of computation out of single atoms by leveraging quantum physics (“quantum computing”). Imagine if our universe had building blocks that were even more amenable to such manipulations. Imagine a universe where it’s just a matter of subjecting “atoms” to the right fields, and you could get them to return answers as if they were equivalent to vast supercomputers in our universe. In a mathematical universe, it’s at least plausible that such universes exist.
So maybe it’s dead-easy for a high-level universe to simulate our whole universe. It only looks hard to simulate our universe because our imaginations are constrained by the materials at our disposal.
Thinking about this a bit further: the Fermi Paradox can be read as evidence in favor of the simulation argument. The paradox is that there’s no obvious barrier to multiple technological civilizations arising and developing into starfaring civilizations, and that barring such a barrier any starfaring civilization would become a galactic civilization in short order (cosmically speaking). So why don’t we see any galactic civilizations? The standard resolutions are:
1. There are non-obvious barriers to the development from technological to starfaring, or from starfaring to galactic. The most common proposed barriers are self-annihilation (nuclear war, engineered plague, etc), complacency (civilizations wind up choosing holodecks or wireheading over boldly-going), or resource exhaustion (e.g. running out of fossil fuels and fissionable isotopes before developing fusion or solar power that can be scaled to civilization-sustaining levels).
2. There is a galactic civilization, but it’s avoiding contacting us and its technology doesn’t leak energy in a way that we can detect.
3. We’re the first. Someone had to be.
The simulation hypothesis would also resolve the paradox: there’s no galactic civilization because the simulators can only afford to simulate a single-world civilization. It seems plausible that there would be a lot more simulators with the resources and inclination to simulate a single-world civilization than those willing and able to simulate a galaxy-spanning empire.
If resources are an issue, why bother creating other galaxies? We’re probably not going to reach the vast majority of them. They seem superfluous.
It’s not that computationally expensive to model other galaxies at a level of detail needed to fool astronomers for the foreseeable future, not compared to the computational cost of modelling human civilization at full resolution.
The spooky anthropic-ish thought: some combination of quasars, pulsars, black holes, and trinary stars are not real, but are instead attempts to model the galactic civilization’s visible artifacts as naturally occurring phenomena.
So, what, if we try to venture too far away from the Earth we’ll hit the limits of high-fidelity simulation? Maybe the simulation will try to contain us, with a transparent shell of some kind; if we’re going at a good enough clip, and the shell’s inside the high-fidelity simulation area, maybe we could crack it!
Sounds like a good idea for a story. You’d have to throw in a bunch of nerds and a cute kid, though. Also whale puns.
Already been done (as an aside) in Philip Jose Farmer’s World of Tiers series. IIRC, there’s a throwaway line about space probes bouncing off the walls of the universe about 50 light years out.
This is the exact premise of Scott’s Unsong, but seems like you knew that?
There is no empirically discernible difference between an inability to simulate conscious observers (true ground level) and the inability to observe simulated conscious observers as such (apparent ground level). When our universe attempts to probe its groundedness by simulating lower universes, we may be unable to observe the consciousness present in the system, and thus not know the hedonic cost of terminating the simulation. Assuming our universe contains typical sub-universe-simulating agents, how quickly we terminate apparently ground-level simulations can help bound the expected lifetime of our universe.
Doesn’t attempting to reason about “typical universes” or one or another type of universe being “more common” or “less common” fall into the same fallacies that drive things like the doomsday argument?
There doesn’t seem to be any “measure” function more natural than any other measure function to compare how you should be counting entirely causally unconnected things like universes set up different ways, or competing histories. You have a bag with red marbles and blue marbles. You draw a marble: What is the probability that the marble you drew is red? You don’t know, sans any other information. You *could* in the absence of any other information decide to treat the probability as 50/50, but that is an arbitrary choice.
It gets worse: Our universe is probably continuous (yes yes, you can reference quantum mechanics (incorrectly), but QM depends on a great many things that need continuity to work: rotational symmetry, Lorentz symmetry, the continuous nature of amplitudes, etc). The set of universes set up the way ours is has the character of continuum infinity (or worse), so you don’t get any nice natural measure from “discreteness” even ignoring arbitrariness of degeneracy there.
It seems that anything depending on assigning “probability” (Bayesian, confidence type) to hypothetical events that you don’t have any experience of (or hypothetical universes, etc) cannot really give you any information about their “typical”-ness.
Strong anthropic arguments work. Weak anthropic arguments fail for this reason.
Supposing there is no natural measure on all possible universes, do we have to conclude that there is no actual measure? It would be nice to be able to derive everything from first principles, but maybe the measure itself is something to be discovered.
Maybe there is, but the only way you are going to discover it (empirically) is to make multiple draws from the “space of possible universes”, preserving your experience each time. 😛 So, convert to Zen Buddhism? (Now that I think about it, even then, you would only really be mapping out the measure of “the space of possible universes where austior’s memory accumulates.”)
In QM, if you take the many worlds interpretation seriously (and I take it more seriously than a lot of other things that get said), then the measure of “realness” of any given quantum state is amplitude^2. Why? Dunno. It’s not something that can be derived *from* quantum physics as it is set up now. It’s an empirical observation from at least one way that we (apparently) do draw from the “space of amplitude blobs of a QM system”.
Not necessarily. There could be a large number of independent facts about the laws of physics, each of which provides a small amount of evidence toward determining the actual measure. Or, there could be only a handful of plausible candidates for a measure, with our universe being overwhelmingly more typical in just one of them.
“QM depends on a great many things that need continuity to work: rotational symmetry, Lorentz symmetry, the continuous nature of amplitudes, etc.”
Sez who? Every calculation we make is in fact a discrete one. This is obvious when you know how digital computers work (they have a minimum possible floating point value, of which every other number they make is composed), but even on my old slide rule I could only read off up to three digits – which is to say, measurement uncertainty converts every analog system to digital.
This is my opinion: calculus with its infinitesimal continuity is a very useful approximation to discrete systems with very fine increments, but whether there are in fact any physical systems with infinite continuity is unproven. (Note: in discrete systems differential equations become “finite-difference” equations, and the solution methods and forms of solutions are similar.)
Thus in engineering we calculate stress and strain in a metal girder with differential equations and assumed homogeneous properties, despite the fact that the girder is composed of discrete atoms, and in fact of larger “grains” which are conglomerates of different alloying elements. These days in fact we often use “finite-element” models which are completely discrete in both their composition and calculations, and give us the same answers as our differential equations.
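The finite-difference point above can be illustrated with a minimal sketch: solving dy/dt = -y, y(0) = 1 with a forward-difference scheme and comparing against the exact solution y(t) = exp(-t). (The step counts are arbitrary; this is just the simplest possible scheme, not an endorsement of it for real engineering work.)

```python
import math

def finite_difference_decay(t_end, n_steps):
    """Forward-difference scheme for dy/dt = -y:  y[k+1] = y[k] + h*(-y[k])."""
    h = t_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * (-y)
    return y

exact = math.exp(-1.0)
for n in (10, 100, 10000):
    approx = finite_difference_decay(1.0, n)
    print(n, approx, abs(approx - exact))
# The discrete answer converges to the continuous one as the increments shrink.
```

Which is the comment’s point in miniature: with fine enough increments, the finite-difference system and the differential equation give you the same answers.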
To drag this back to the main topic, if in fact this universe has infinite continuity anywhere, then it obviously cannot be fully simulated on any digital computer. (Even a quantum one, with a finite number of qubits.)
In any case, I don’t see any reason why any higher civilization would chose to try to do such a detailed simulation as our universe would require. Yes, we do simulations, such as simulating the collision of black holes, but we don’t waste resources by populating them with ants. For entertainment? Don’t we leave out most of the tedium of daily lives in our simulated entertainments?
I know, the minds of higher civilizations might be incomprehensible to us – in which case, the whole simulation issue is pointless to speculate on.
Until someone produces actual empirical evidence which can *only* be explained by the simulation argument, I fail to see why I should consider it anything more than another seminal deposit in the long, sad history of philosophical masturbation.
Have you heard of the underdetermination of theory by data? There are infinitely many hypotheses which explain any given set of evidence. Your criterion for what counts as good science–that only one theory explain the data–makes science impossible!
Seriously though, once you think more about how science works you’ll realize that the simulation argument is legit. It’s just basic probability calculations plus an indifference prior over observers-observing-what-I-am-observing.
(By legit I don’t mean “is correct,” since there are decent objections to it. What I mean is, “is a reasonable and scientific thing to think.” It’s not something to be dismissed so easily as you do.)
The problem is that the generic simulation argument has zero predictive power, which makes it rather worthless. It’s actually very similar to the issue with using God in a theory. Depending on how you define God’s motivations/power/etc, you can explain everything and it’s opposite. As there is no reason to expect God to be a certain way, you just end up projecting observations on God, which is worthless.
As there is no reason to think that being in a simulation would be in any way distinguishable to us from not being in one, these abstract simulation theories provide us with no specific predictions that we can expect and/or use to our benefit.
The value of science is telling us what cannot happen, rather than what can. A theory that doesn’t limit our universe in any way, is not scientifically valuable.
I wouldn’t say it makes no predictions, not quite. It does suggest that we will never identify a computing technology that will allow us to run detailed ancestor simulations, and that our physics doesn’t enable architectures more powerful than Turing machines (oracles, for example). That last one might seem like a big jump, but Turing completeness shows up all over the place unintentionally and would be hard to avoid, whereas we know, for example, that no oracle can solve its own halting problem, so it seems unlikely any universe could simulate its own physics in detail with useful amounts of resources, and if we’re at the bottom then Turing machines should be all we have left.
I don’t see why we would necessarily be able to do so in a non-simulated universe; nor do I see why we wouldn’t be able to do so in a simulated universe. The parent universe may simply have much more computing power (perhaps even infinite) and thus able to dedicate sufficient resource to our simulation to allow ancestor simulations. Then our simulations would be awesome to our eyes, but very shitty compared to the parent universe simulations. But we would never know.
…So no predictive power here.
A Turing machine requires infinite resources, which don’t seem to be available in our universe given our current knowledge. So my working theory is that we can only produce machines less powerful than Turing machines.
However, I don’t see why we couldn’t produce basic ancestor simulations with machines slightly less powerful than a very good Turing complete machine. Especially as there is no reason why we would need to simulate the depth, complexity or size of our current universe. We could simplify it a lot and still have self-aware sims doing their thing.
And again, our universe may be a very basic and simplified ancestor simulation. Our assessment that this universe is rather big and complex is based on our subjective viewpoint. There is no reason to assume that this assessment would be shared by an entity simulating us (who may be extremely more intelligent than us).
I would definitely hope that the entities simulating us would be more intelligent and rational, given how dumb and irrational mankind is.
Well, from a Bayesian perspective you simply need enough evidence which is better explained by the simulation argument than any other argument.
What about if we started making ancestor simulations, would that count?
Not unless we establish that each ancestor in the simulation can reasonably be considered sentient.
Right, presumably by this you mean “has conscious experience”?
The great thing about masturbation is that it’s fun. That includes philosophical masturbation.
I will never understand people who decry others having fun. I include in this category those who decry both physical and philosophical masturbation.
While I like the mental gymnastics, something about this argument seems off to me, but I can’t put my finger on it. Gotta mull it over I guess.
Ok, I was wondering if someone might help me with a really basic question on the simulation argument, so basic that I’m near positive I’m missing something, so this is a sincere request for help and not a challenge to prove this wrong. It’s vaguely like the Sean Carroll argument but even more basic.
The simulation argument presumably rests on certain facts about the empirical world that make it plausible that civilizations could be simulated. Doesn’t matter what those facts are, but just call them the set F. (I don’t know how you’re supposed to denote sets.) Now imagine that we are, in fact, in a simulation. Doesn’t it follow that we no longer have any basis to know what the universe is like in the world of the simulators? I assume there’s no a priori reason to think that their reality has to match our simulated reality. But if that’s right, then we have no basis to believe that the set of facts F holds in the relevant universe, that is, the universe of the simulators. The conclusion of the argument (we are in a simulation) eviscerates the empirical basis for the simulation argument in the first place. Which means the argument can’t provide us with any rational ground to believe we are in a simulation.
Why isn’t that a good argument against the simulation?
This is a good question which I have been thinking about too. I don’t have a full answer to it, but I do have two partial answers:
Answer One: Consider this modified version of the argument:
(1) Either the actual laws of physics are more or less as they appear to us, or they are not.
(2) If they are not, then we are probably in a simulation. (Alternatively, we could be given hallucinations by an evil demon, or there could be a vast conspiracy to sabotage scientific measuring instruments, etc. but all of those are less plausible.)
(3) If they are, then [insert simulation argument here] so we are probably in a simulation.
Answer Two: Occam’s Razor tells us to prefer simpler theories. If we are in a simulation, then this applies to the laws of physics of the simulators: theories according to which the simulators have simpler laws of physics are more plausible than theories according to which the simulators have more complex laws of physics. OK. So now we ask: are there any simpler laws of physics, capable of supporting intelligent life, than the ones we know of?
Maybe, but if so I’m not aware of them. (Well, there are some trivial ones, but they get us into some anthropic issues so I think we can ignore them.) So maybe Occam’s Razor is all we need here.
“Less plausible” (at least the demon one) makes certain assumptions about the nature of real reality that do not strike me as plausible.
Fair enough: If you think the evil demon hypothesis is more plausible than the simulation hypothesis, then the simulation argument merely establishes that EITHER the evil demon hypothesis OR the simulation hypothesis is true.
Hey, it could be something even weirder than that. The whole point is that if this is just a simulation, it would be very hard for us to work out what reality really is like. It would make Plato’s Cave look like a piece of cake.
Thanks, Daniel. These responses are very interesting and helpful. I’m especially going to need to give some thought to the first one. Thanks again.
I think CT’s point is on the mark; it was among the first things that came to my mind after the initial incredulous stare wrt simulation arguments. I think it is fairly devastating, but I am also puzzled that I have not seen it addressed by friends of simulations.
And I am afraid your modification does not help much. Your point (3) glosses over the crucial point: the simulation argument uses actual physical theories and specific facts (CT’s “F”) to estimate the probabilities, e.g. the age of the universe, the possible density of “computronium”, the frequency of civilizations becoming able to make simulations, not only laws but also historical facts about our universe. But all of these are bunk if we actually are in a simulation. The real (parent) universe could be completely different. We simply don’t know; we only know that we should bracket everything we thought we knew (except some pure mathematics and logic, but maybe the simulators tweaked our minds here as well).
Nor could we be sure that we wouldn’t experience strange irregularities, “flaws in the matrix”. So we should drop the assumption of the generality and universality of science. If we are at the whim of the simulators, maybe there really were/are demons (i.e. patterns of experience fitting the demons of lore) in the Middle Ages, or today in some voodoo cult, because the alien simulator has fun with that.
We are simply back at Descartes’ problem with the malign spirit, only cast in contemporary language. And the argument is actually worse than Descartes’, because Descartes does not use input from physics to argue for the deceiving demon. Descartes runs into threatening circularity an iteration later, when he uses God’s goodness as a warrant that our “clear and distinct” perceptions are usually not deceptive. (It’s circular because in one of his arguments for God, Descartes uses the fact that we have a clear and distinct idea of something like God. He might have been aware of this, because he gives more arguments for God’s existence.) But we cannot assume such goodness from the simulators, so this road is closed for modern simulation arguments.
As for your second option: Occam’s razor can hardly be invoked here. It’s too sharp. Because it is obviously a far simpler hypothesis to assume that there is at most one universe, viz. the one we actually experience, and not a complex chain or nests of simulations.
The only way out, it seems to me, is to assume that the set F of facts from our physics (or maybe “physics”, as they might be just figments of a simulation) that we use to estimate the probabilities (or their more precise values) for the simulation argument is not specific enough to matter. So, roughly: a huge class of possible universes will be such as to admit simulation, and then there will be simulation with such-and-such probability.
I don’t find this plausible either and it is not what is done in the version of Bostrom’s argument I have seen (but I certainly have not kept up to date in this debate).
Johannes, I think you didn’t read my argument correctly.
“Your point (3) glosses over the crucial point: the simulation argument uses actual physical theories and specific facts (CT’s “F”) to estimate the probabilities, e.g. age of the universe, possible density of “computronium” etc., frequency of civilizations becoming able to make simulations, not only laws but also historical facts about our universe. But all these are bunk if we actually are in a simulation. The real (parent) universe could be completely different.”
No, my premise 3 says “IF the actual laws of physics are more or less as they appear to us, THEN [insert simulation argument.]”
It’s not an argument for or against simulation. It’s an argument against giving a shit.
The Simulation Question suffers the same problem as Deism: maybe God exists; maybe He doesn’t. We’ll never know (from the inside), since a universe with a deist god is indistinguishable from a universe without a god. Therefore, it’s not worth worrying about.
The only scenario where it’s productive to ponder this is: if the simulation leaks (which is the premise of Unsong, The Matrix, The Thirteenth Floor, etc); and
God/higher simulations follow laws of physics that we can reason about (i.e. set F holds for the relevant universe).
For what it’s worth, as long as we’re going with these expected frequency arguments, I’d expect far more leaky simulations than perfect simulations.
To me it seems more plausible that most universes can simulate most other universes using Turing machines at various rates of efficiency. The more computing power you seem to have access to, either for physics or to simulate other physics, the more you can assume that you’re not a simulation. If we’re in a simulation, it’s a very dedicated one, probably focused on simulating us and our solar system, or an insanely decadent one where some civilization has enough computing power lying around to simulate us as a random planet in their equivalent of Master of Orion.
Or the higher level universe has completely different laws of nature making their simulations cost almost or actually nothing. Or perhaps the laws of nature that we have are universal and the level of detail that we see requires so much computing power that it’s completely impractical in practice, so no one bothers.
Err, if the argument is over whether or not the universe is simulated at all, why couldn’t we be in one of the levels between the “real”, original universe and the minimal, “ground-level” simulation?
IE, we are being simulated, but we can still build simpler simulations?
Obviously whether or not we are in the simplest possible universe is a different argument (probably that ought to be “whether or not we are in the simplest possible useful universe”).
If you assume that simulations are created for a reason, then maybe the smallest simulation that allows consciousness is too simplified from the next level to be particularly useful, meaning that there is a band of uselessness between where we are on the simulation chain and where consciousness is impossible to simulate.
The answer is not “why couldn’t we be” in an intermediate civilization. It’s just that if each civilization runs >1 simulation, then the probability of us being at the ground level is significantly greater than in all the other levels combined.
This actually isn’t true.
If the simulation tree is a binary tree (i.e. every universe that is not at ground level runs exactly two universe sims), then there is only one fewer non-ground-level sim than there are ground-level sims. That isn’t significantly lower.
If the simulation tree is ternary (or 4-ary, 5-ary, etc.), then a greater percentage of the nodes on the tree will be leaves, and perhaps then this type of reasoning is sound. But in the first place there is no motivated reason to assume that the simulation tree is strictly n-ary. I’d imagine that as you descend the tree, as universes have progressively less computational power, the number of child sims they spawn will decrease, leading ultimately to sims that can only spawn one child each, and then to the ground level. In that situation, the majority of sims will certainly not be at the ground level.
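The leaf-counting claims here can be checked with a quick sketch. This is a toy model (a full n-ary simulation tree of arbitrary branching factor and depth, my choice of parameters, not anyone’s actual estimate):

```python
def count_nodes(branching, depth):
    """Count nodes in a full tree where every non-ground universe
    runs exactly `branching` child sims, down to `depth` levels."""
    internal = sum(branching ** d for d in range(depth))  # simulating universes
    leaves = branching ** depth                           # ground-level universes
    return internal, leaves

# Binary: ground-level sims outnumber all the rest by exactly one, at any depth.
print(count_nodes(2, 10))   # (1023, 1024)

# Ternary: the ground level is a clear majority (about 2/3 of all nodes).
internal, leaves = count_nodes(3, 10)
print(leaves / (internal + leaves))
```

So the “most universes are at ground level” step really does depend on the branching factor staying above two all the way down.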
The subjective experience of denizens of ground-level universes resembles that of the denizens of Dwarf Fortress.
But not quite, since even they can build Turing Machines.
>The subjective experience of denizens of ground-level universes resembles that of the denizens of Dwarf Fortress.
They have Turing machines, but no consciousness. It’s Disneyland with no children.
They have a characteristic with the name “soul” that actually has a causal effect on their behavior. That’s more consciousness than I know I have.
How do you determine what strings of numbers count as simulations of discrete/computable universes with conscious observers, and what count as noise, or gibberish, or gigantic strings of the numeral 7? On a sufficiently perversely designed Turing machine “77777777…..” could develop extremely complex behavior. Every set of strings of finite length can be turned into every other set of strings of finite length by an appropriate transformation or cypher. The meaning of a given string (and how “complex” it is) requires not just the string, but also some kind of definition of the reader and how it is read.
Minecraft worlds, for example, when read by humans looking at the client look nigh infinite in extent and very complex. But the whole thing was grown from a seed string with less than a kilobyte by a deterministic algorithm.
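The seed-versus-world point can be illustrated with a toy deterministic worldgen. This is a hypothetical sketch (nothing like Minecraft’s real generator): a few bytes of seed, plus a fixed algorithm, yield an arbitrarily large world:

```python
import random

def terrain(seed, width):
    """Generate `width` columns of terrain height from a tiny seed.
    Everything 'in the world' is a deterministic function of the seed."""
    rng = random.Random(seed)  # seeded PRNG: same seed, same sequence
    height, columns = 8, []
    for _ in range(width):
        # random walk clamped to heights 0..15
        height = max(0, min(15, height + rng.choice([-1, 0, 1])))
        columns.append(height)
    return columns

# The same small seed reproduces the same large world every time:
assert terrain(42, 100_000) == terrain(42, 100_000)
```

The description of the generated world can be vastly larger than the seed that generated it, which is exactly the reader-dependence problem: the complexity lives partly in the decoder.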
So, you really have two readers here (bases of string evaluation): The person inside the simulation, to whom the world means one thing, and the person outside the simulation interpreting it, for which the simulation could be an entirely different thing (or any arbitrary number of different things depending on how he decides to dial his decoder ring that day). It is not obvious to me why one necessarily relates to the other.
Let us suppose that there exists a sequence of strings corresponding to the world history of a discrete/computable universe (though it pains me to arbitrarily privilege one time axis like that, so be it..).
Let’s suppose that this sequence of strings is very popular. Some arbitrary number of “higher-level” universes N contain the string, or an equivalent. (Really, an infinite number of higher level universes, since there are an infinite number of larger strings that contain that string as a substring).
Is there N times the experience of life in the universe defined by that string, or only 1 times? (Or some arbitrary number not connected with N or 1?)
If there is a finite regress of finite-memory universes simulating smaller universes, then there is also an infinite regress of larger-memory universes simulating the high-level universes. Is counting the “frequency” with which a given universe should appear getting ridiculous yet? I would think the exact opposite argument could be made about the “number” of universes of a given complexity.
This is the exact question that gets me every time simulation arguments come up.
A related one is this: if a pattern of electrons going through wires is as good as flesh, blood, and neurons because they contain the same information, why isn’t the mere source code for a simulation also just as good? In other words, if you’ve got the code to simulate a universe, why do you need to press play for it to “count?” (This assumes the sim universe is deterministic of course. Otherwise the code on its own wouldn’t contain a complete specification of the sim universe.) For such “unimplemented” universes there are no resource requirements beyond the memory to store the code itself.
Now, going a step further, it is in principle possible to code a sim universe which has the peculiar property that at some point in the simulation the source code for that simulation is written down. Then, by simply writing down this source code once, you’ve created infinitely many nested universes. Of course, now that you have infinitely many of these universes, you can ignore the lack of measure, wave your hands, and proclaim that we almost certainly exist in one of those special simulated universes.
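The “simulation that writes down its own source code” move is the same fixed-point trick behind quines: programs whose output is their own source. A minimal Python sketch:

```python
# A tiny quine: `quine` is the full program text, reproduced by itself.
src = 'src = %r\nprint(src %% src)'
quine = src % src  # substituting src into itself yields the whole program
print(quine)
# Executing the printed text prints the same text again, and so on forever.
```

Whether an unbounded tower of nested “write-downs” counts as infinitely many universes, rather than one string repeated, is of course the measure problem all over again.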
Indeed. If the universe in question exists as a logically complete consistent system, why does it even need to be instantiated somewhere else (instead of off in “mathematical-space”) for it to count? I somehow doubt that the reality of the universe is dependent on whether or not it is represented in any way in another causally unconnected universe (otherwise you get a different infinite regress problem with what gets to confer “realness” on something).
If the universe is causally complete and consistent (never mind that we’d never know this for sure), and you ask whether or not it’s a simulation, then the answer is mu. There is no way to tell (which is equivalent to saying there are no consequences distinguishing the two cases). And the existence of a causally closed universe is unconnected with its representation ‘elsewhere’.
If the universe is not causally complete, but it interacts in some way with a completely different universe, *then* you would A) have evidence for a simulation hypothesis or other sort of interaction, and B) need a complete model including the other universe to explain your experience. (And C) have some means of interacting bidirectionally with the other universe.) (And the wider universe would then be ‘the universe’: The complete set of entities on which the range of your experience depends)
IMO: Until we find evidence, ‘simulating universes’ are unnecessary. If we do find evidence, ‘simulating universes’ would be testable. The sort of answer our experience with every other phenomenon so far leads us to expect.
A lot of these “plausibility arguments” sans evidence seem to me to be a misuse of statistical reasoning.
What do you mean by “causally complete”? Same for “Causally closed”? I suspect these terms hide important differences between inputs and outputs.
Physical computers aren’t technically Turing machines, because a Turing machine has infinite storage. An actual contemporary digital computer is strictly speaking a finite-state machine. I wrote “technically” here, but I think it’s the kind of detail that’s going to matter a lot when making arguments about simulating whole universes. I’m not so sure that constructing a robust computer with infinite storage is possible in this universe even in principle, given relativity and universe expansion. Brains seem capable of some pretty impressive feats, but it doesn’t look like they have infinite storage capacity either. So for all we know, we might already live in a universe that banishes Turing machines from existence. Did you want to prohibit finite-state machines from existing in the simulated universe too? I’m not sure that it’s even logically possible. Unless we’re speaking about a kind of trivial toy “universe” that consists of, say, one particle stuck in zero-dimensional space.
Plus, whether a universe with physics like ours can be simulated on a Turing machine (or an approximation thereof) is not a definitely settled matter either. But assuming it can, the simulation is most likely going to take a lot of overhead, which means time is necessarily going to tick slower in the simulated universe. In itself, that’s not that much of a problem, but then there’s the whole waterfall argument: a physical system could be said to realise a simulated universe right now, but in a manner that makes it practically inaccessible to us, because mapping the simulation back to something we can perceive is too computationally expensive, or not possible at all. So imagine that the universe we live in is a simulation; was it created deliberately inside an alien computing machine, or does it exist as a “waterfall”, mapped to events in the “outer” universe in a completely arbitrary manner? From the “inside” there’s no way to tell what realises any given computation.
Problems like this make me think that it isn’t even meaningful to speak of any “outer” universe at all. The “simulation hypothesis” is a meaningless question.
The size of the largest possible finite-state machine within a universe is necessarily no smaller than the size of any simulation of a universe that can be created within that universe.
I suspect that it is strictly larger, but only ‘not smaller’ is trivial.
This is an interesting point I haven’t heard before. If true Turing machines are impossible in our universe, then what might a universe look like that *did* allow them? Would the laws of physics in such a world be something that in principle a Turing machine could compute, or would it require something higher on the relevant complexity hierarchy, like a halting oracle? If something like that applies and generalizes, maybe we end up with something like the Structure (https://qntm.org/structure, a bit hard to read as a story but interesting).
Edit to add: On reflection, given the existence of universal Turing machines I suspect you could create a universe that allowed Turing machines that could itself have physics that don’t require anything more than a Turing machine. Not sure whether it is likely, and if the machine you’re running on only has one tape it may be too expensive to do often.
“Ground level” universes wouldn’t have to be incapable of running simulations, they would just have to be intelligently limited so that whenever anyone tested simulation capability, it seemed to be there, but whenever we aren’t looking, the sims are replaced by cheap approximations.
EDIT: The same applies to the physics of the universe. It would be approximated most of the time, and fully implemented only when it was needed (and even then, maybe it would just be a finer grained approximation – good enough to be indistinguishable from the real thing to the agents in the simulation)
Basically, Sean Carroll is assuming incorrectly that simulated universes would be physically accurate, when in reality they would be a bunch of approximations like a computer game.
Exactly, and very near 100% of beings that evolved with limited computational resources would be trying hard not to waste them.
They might even consist entirely of very bad approximations indeed. To Pacman, Pacman physics would presumably appear complete and accurate.
The agents who inhabit Pacman are not intelligent in a meaningful sense, so it doesn’t matter.
Agents that can think for themselves would notice problems.
The Con of the Gaps.
Also the use of “ground level” here is the opposite of the usual terminology, but whatever.
Consciousness is a problem for the philosophers rather than for the physicists, which is not supposed to imply that it is a lesser problem in any way. However, I find it very unlikely that human behaviour, including the firing pattern of neurons that we interpret as consciousness or thoughts thereabout, is governed by anything other than quantum physics or whatever it will be replaced with in the future.
If the universe simulators of the future populate their simulation with agents of human-level intelligence and also hardcode them to pretend that they’re conscious, what does it matter whether or not they “really” are conscious? Who could prove them wrong from inside the simulation?
Simulation arguments are an extended version of solipsism, without any predictive power. If we were actually living in a simulation, then no arguments or conclusions whatsoever, and in particular those on whether or not we live in a simulation, can have any meaning. We would always have to ask, and could never know, whether any of our conclusions would also make sense in the top level of reality. Whatever positive or negative proof we find for either side of an argument, we could never be sure that our whole universe hasn’t been set up by our simulator gods to mislead us either way, because it’s obviously impossible and nonsensical to try and look from outside the simulation to determine whether we are living in a simulation. It’s like asking: what is outside of our universe? What is inside of a black hole? What happened before the Big Bang? While those questions may be entertaining to ponder, we will never know.
Therefore it is unprovable whether or not we live in a simulation. Therefore, Occam’s Razor says that the only rational assumption is that we do NOT live in a simulation.
“Simulation arguments are an extended version of solipsism, without any predictive power. ” Not true if we assume that you are more likely to be simulated the more interesting your life is. If we are in a simulation, and if you Hackworth are being simulated and are not just an NPC then I predict that your life will turn out to be more important to history than one would otherwise think.
How is “simulated” Hackworth different from “just NPC” Hackworth? Who even makes that distinction, and on what basis? Again, my whole argument is that if we live in a simulation, then some outside program effectively controls our every perception, thought, and action, and we would have no way of differentiating any of it from the top-level reality.
The parallel to solipsism I want to draw with that is that solipsism is disproven the same way as the simulation argument. Solipsism says that you are the universe, nothing really, physically exists except you. Nobody can control the outside world by the power of thought alone, therefore even if solipsism were the truth, it would be indistinguishable from non-solipsism, and therefore solipsism must either be equivalent to the outside reality, or simply false.
NPC Hackworth isn’t conscious and runs on a relatively small amount of processing power.
And how would you, inhabiting the same simulation as I, know the difference?
The argument I first heard about simulation was this: if you are simulating the universe accurately enough, that universe will eventually make its own simulation (because it’s simulating a universe in which a simulation is made), and so on forever. Thus, there is 1 real universe and |N| fake ones, so the probability we are in the real universe is 0.
Unless I’m missing something, this argument lies in obvious contrast to the claim that simulated universes can’t simulate other universes.
This argument assumes that you have infinite (actually infinite, not near-infinite) computing power. If you don’t, then in any given universe you can only simulate a universe smaller than that one, and the recursion must end.
Can I get a definition of “simulation” here? Because from what I’m getting out of the description of ground-level, then aren’t stories ground-level simulation universes?
Only stories not advanced enough to contain Turing Machines.
Well, it depends upon the instantiation media of the story told, right? It seems that stories generated from sound (orally) and print have no means of characters gaining consciousness within and breaking the fourth wall.
Recordings of a story (radio plays, TV, film, scan of print) meet the same requirement, as despite computer hardware processing the sound for some playback equipment, the story itself still cannot gain consciousness.
Procedurally generated text stories, on the other hand…hrm. The latter has some wiggle room, in that it could be argued that the story, once posted, is independent of the generation process and no longer is connected to the Turing Machine of the equipment that generated it. In this case, visual media generated from CGI still becomes a ground-level universe once it is rendered into a “recorded format.” A Pixar movie, or a cutscene, or action CGI SFX may be considered high-level, when it’s still within the animation program, but not once it’s “cut loose.”
A TAS input program being run on the console is high level, but the youtube upload of the screencapture video is ground-level.
And, again, can I get a definition of simulation here? Is a theater production considered one?
Actually, since a work of fiction is a simulation of the universe, and it’s possible for a work of fiction to include an infinite UTM, and it’s possible to write a work of fiction containing a UTM in a universe where UTM is not possible, I withdraw my claim.
It’s not possible to perfectly simulate a world more complex than yours, but detail of simulation was never implied to be perfect.
Eh. Assume we’re in a simulation. What reason do we have to believe that the universe simulating us is sufficiently similar to our universe that the concept of limited simulation even makes sense? Maybe our universe is limited, not because of system constraints, but to see what happens in a universe with different or stronger constraints than the real universe. Or maybe we were just written by a lazy programmer, and better-written universes don’t have the limitations ours does.
Likewise, assume we’re not in a simulation. What reason do we have to believe that an un-simulated universe should have sufficient processing power to simulate a consciousness-enabled universe?
(In general I reject the notion that our existence can be used as evidence of anything.)
It’s even possible that there are no constraints in the real universe at all and that our laws of nature are 100% set by the entities running the simulation. Or it’s possible that we live in the most generous laws of nature that are possible and yet we will never reasonably be able to do a simulation of the complexity required for a simulated universe, so all universes are ‘flat’.
Eternal inflation would cut against Carroll’s argument. If eternal inflation is correct for the original multiverse then new universes are being created at a super-exponentially increasing rate. Even if every universe that gives rise to an industrialized society also gives rise to billions of simulated industrialized societies, at any given time-slice across the multiverse (yes, this is difficult to define consistent with relativity) there are vastly, vastly, vastly more “real” universes than simulated universes because during the time gap between when a real industrialized society develops and when it creates simulated universes a huge number of new “real” universes will be created by eternal inflation.
“Other things that came to mind when I was writing Chapter 34 of Unsong: The Post”
Can someone give me a nutshell-sized argument for why I should take the hypothesis “we are simulated” any more seriously than the existence of Russell’s Teapot?
It’s a great match to a variety of religious traditions : -/
We are too special. It appears that we live in an empty universe. If our high technology civilization continues we will eventually go on to colonize trillions of star systems. Because of the (apparent) impossibility of traveling faster than the speed of light, once we spread through our part of the galaxy we will be beyond any local disaster and will survive to the end of the universe, an unimaginably long time from now. But right now you appear to live in that very brief period of time in which mankind could destroy itself, making you fantastically important. Something isn’t right. One solution is that in the future this brief but important period will be simulated numerous times.
The impossibility of traveling faster than the speed of light is one of the many things that will prevent us from ever colonizing “trillions” of star systems. Keep in mind that, so far, we have colonized exactly zero star systems (yes, this does include our own star), and it doesn’t look like we’ll even leave our own planet any time soon (other than for a brief jaunt on the Moon, which we can’t even reach anymore).
Trillions requires that we get to at least one other galaxy, probably two; Andromeda is about 2.5 million light years away. I was under the impression that it was likely we could reach any galaxy in the local group before cosmological expansion cuts them off from us, but I’m not a cosmologist.
“It appears that we live in an empty universe” places far too much weight on our meager observational ability, which has itself existed for only an eyeblink in universal history. If many other civilizations exist beyond our current observational threshold, does the whole thing fall apart?
But anyway, so what? Maybe we are special people in a special time. Somebody has to be, if special times exist. Existing in this eyeblink is profoundly unlikely, but so is existing in the next eyeblink. Or the one after that. It’s profoundly unlikely that we’re the first advanced civilization. But it’s profoundly unlikely that we’d be the 1,312,571,309th too.
“Unlikely things happen, therefore someone who cares about us must be controlling it” is precisely your standard religious origin, and what I so far don’t get is why “simulation” is any more likely than your standard creationist theism. Only practical difference seems to be whether you can get invited to the cool parties.
The probability of me being me and not anyone else is so absurdly small that I must therefore conclude I do not exist.
(Suck it Descartes)
Thread winner, let’s pack it up folks.
I think you just summed up the problem with the simulationist argument, the Carter Catastrophe, et cetera more succinctly than I’ve ever seen. Nicely done.
When you describe it that way, it reminds me of the “Carter Catastrophe” (brief description: it’s far more likely that some randomly selected cohort of humans will be in the middle of all humans to ever live than at the beginning, therefore it’s likely that we are in the middle instead of the beginning, therefore human civilization will probably end in a few hundred years instead of lasting forever.)
It feels like the simulationist hypothesis steals the same sort of hard-to-articulate bases that the Carter Catastrophe does.
Stephen Baxter? Loved that series.
By that logic, don’t we have 50,000 more years to go or so?
I mean… you should go read the actual simulation argument (by Bostrom), it doesn’t fit in a nutshell but it’s clear and precise. If you read it and still don’t agree/understand, message me (or respond here) and I’ll explain it more and/or defend it against your objections and/or help you publish your objections (if they are good.)
Let me point out that the simulation hypothesis is not a new idea at all. You probably know it under the name of creationism.
Is the logical structure of the creationist hypothesis the same as the logical structure of the simulation hypothesis? If not, why say this?
(Specifically, it looks to me like the creationist hypothesis is “it’s necessary that something external made this, because otherwise how could it exist?” whereas the simulationist hypothesis is “it looks easy to make self-contained systems that one is external to, so shouldn’t there be lots of them?”.)
“shouldn’t there be lots of them?” is the simulationist argument (from Chalmers, et al). The simulationist hypothesis is that we live in a simulation.
The logical structure seems similar to me: we were created by someone(s) from outside our world, someone(s) who are not subject to our laws of nature and with unimaginable power over our entire world. In fact, within the simulation hypothesis I don’t see why you wouldn’t call the simulation-maker “God”.
Creationism has it that the apparent world is fully real, simulationist doesn’t.
Define “real” : -/
In Abrahamic religions there is a vast chasm between created and uncreated. One easily can call only the uncreated “real” and, in fact, this is a recurring theme in many religions.
… which would reconcile the theory of Creation c. 4004 BC with the evidence for an older earth. In fact, it’d actually also reconcile the evidence that young-earth Creationists point to as well: the simulators did a sloppy job on the backstory!
(Epistemic status: tongue-in-cheek.)
And also why there isn’t only one deity/pantheon for our entire world – there are different parties playing in the simulation and they have different aims (some are doing research, some are historical re-enactors, some are role-playing, some are only here to see what happens if you turn a bunch of sims loose, etc.)
You joke, but the Omphalos hypothesis IS a thing and some creationist Jewish theologians apparently still take it seriously.
Which doesn’t mean we should, but that it’s not actually a strawman – some creationists REALLY are willing to believe God created the world to look older than it is; it’s not just something the ‘evil atheists’ made up to make them look bad.
Though this is sometimes parodied as ‘The Devil put dinosaur bones in the ground to deceive us!’.
I have no idea how many honestly believe this – I can’t find support for people holding this POV currently – but the Devil, rather than God, deceiving people seems more fitting with most theology…
(maybe you already knew about it, but someone else here might not)
Yep, already knew it. How elegantly unfalsifiable; I can only talk about it with my tongue in cheek.
I don’t really buy your argument.
Imagine there is exactly one universe that simulates exactly one other universe at 1% speed, and this is all that exists. Then, it seems to me that I can be more-or-less 99% confident that I’m in the former universe, because I have only 0.01 times as much thinking-time in the latter.
Thought of in this way, there are no hard-and-fast ground universes. Instead (assuming infinite memory), you can have infinite universes, each of which contains slightly slower simulations.
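The thinking-time weighting in this comment can be written out explicitly; a quick sketch, assuming credence is proportional to subjective compute:

```python
from fractions import Fraction

# Credence proportional to thinking-time (the comment's assumption):
# the outer universe runs at speed 1, the inner simulation at speed 1/100.
speeds = {"outer": Fraction(1), "inner": Fraction(1, 100)}
total = sum(speeds.values())
credence = {u: s / total for u, s in speeds.items()}
print(credence["outer"])  # prints 100/101, i.e. roughly 0.99
```

Extending `speeds` with a chain of ever-slower nested simulations shows why, under this weighting, the outermost universe keeps most of the probability mass even with infinitely many layers.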
An Argument from Economics, That Most Conscious Entities Are Probably In a NON-Simulated Universe:
So, I took VirtualBox and installed it on a real physical computer, aka the host machine. Using VirtualBox I created some simulated computers (‘virtual machines’) and even some virtual machines nested inside other virtual machines. I ran go-playing programs inside and outside the VMs, and pitted them against each other in go tournaments which are judged on the non-virtual level.
– Which go programs do you think tend to be more efficient in their use of the host machine’s computational resources, the ones running inside N levels of simulation and handicapped (or at least slowed) by all those levels of simulation overhead, or the programs running directly on the host hardware?
– If playing go were intrinsically a conscious process, which of these go programs do you think would be more likely to be getting more conscious experience?
– In a world in which agents contend for resources (whether by ‘a well-regulated free market,’ by brute force, or by some of both) which programs do you think would be more likely to win a contention for resources? Will the agents running directly on the host hardware probably win almost all of the resources for themselves? (And if they spawn, for their relatives?)
The main reason that some computing takes place in VMs on my computer is that I am an oppressive leader who likes to exile untrusted entities to limited VMs where it’s harder for them to take resources from my more loyal subjects. 🙂 So if most conscious experience is occurring inside simulated universes, does that mean that the host universe is an oppressive dictatorship? 🙂
I left another comment that seems to have disappeared. I’m sorry if this shows up as a duplicate.
Do you think consciousness should in principle be either fully intelligible (in higher-level universes) or not at all (in lower-level ones)? Or is there some in-between that could apply in intermediate-level simulations?
Does it matter if the Simulation Argument is true or not ?
1). Is there any way to find out, in principle, whether our Universe is a simulation ? I’m not talking about purely philosophical arguments; is there a way to collect some sort of evidence regarding the proposition ?
2). Assuming that there is a way, can this be done reasonably soon — say, in the next 10,000 years or so ?
3). Assuming the answer to the above is “yes and yes”, would knowing the answer change any of the decisions you would rationally be able to make ?
Personally, I would answer “no” to all of these, so I kind of don’t care. Sure, it’s an interesting idea to speculate about; but then, so is conventional religion…
We observe an event that contradicts the laws of physics.
It could happen at any time.
Probably not, but knowing that the universe is a simulation would add a different characteristic to our trying to explore it.
We’d try to think of ways to understand the intentions and purposes of the intelligence running the simulation.
We’d have a possible explanation for the great filter – we’re the only life in the universe because we’re the focal point of the simulation.
We could try holding ourselves to ransom. We could aim a galaxy gun at our civilisation and scrawl a message on the stars: TALK TO US OR WE KILL OURSELVES AND SCREW UP YOUR SIMULATION. Even as a bluff, it would be fun to try.
Religion would get a huge shot in the arm.
We’d start to think more about teleology, and grand purposes.
“We observe an event that contradicts the laws of physics.”
So astronomers observe the orbit of Mercury differs from that predicted by Newton’s laws. Or, physicists observe that they can’t predict when a single atom will undergo radioactive decay. But they don’t conclude they live in a simulation; they conclude there’re deeper laws of physics remaining to be discovered.
I see what you’re getting at here – but it can be hard to tell what is or isn’t a glitch in the simulation. If someone argued that the whole of quantum mechanics counts (given that it’s unpredictable and just the sort of thing which might be implemented as a calculation-simplifying kludge), I actually think it’d be a fair argument.
I think any discrepancy in the laws of physics should make one update towards a simulated universe, if only slightly. This is different to “completely abandon naturalism at the smallest pretext”.
Though of course, I could imagine breaches of the laws of physics that would make me immediately believe in a simulated universe. Like someone activating a video-game cheat code in real life.
I think the discrepancy between quantum physics and gravity is a better argument for us being in a simulation than consciousness. The latter problem is mysterious but that’s its only problem. We’ve only been directly looking at the brain for a few decades now, it’s not that surprising that we haven’t figured out everything. The former is much more serious. There is a deep contradiction that hasn’t been resolved by the smartest people in the world. Recently, we’ve hit the “nightmare scenario” where the experiments are confirming the standard model but not telling us anything new. It’s too early to declare that the problem is because we are in a simulation but it’s not too early to suggest it.
Could you expand on the “nightmare scenario”? I’m aware of the contradiction between quantum physics and gravity (in broad terms), but not of the current state of the field.
The “nightmare scenario” refers specifically to the fact that all the high-energy experiments at CERN have not revealed any unexpected physics. Everything that the LHC found was predicted already. Thus, we can’t rule out any models that hadn’t already been ruled out, and we don’t have any novel clues as to possible better models.
Evan, you can read more about it here.
That happens all the time. That’s how the laws of physics get discovered. Unless, perhaps, you are implying that our current understanding of physics is as good as it’s ever going to get ?
Right, but lots of things could happen at any time. What is the probability that we will get some solid evidence for the Simulation Argument during the next, say, 100 years ? Before you tell me the number, can you tell me how you arrived at that number ?
Assuming that we could get some solid evidence about the simulation, how is this different from what we’d do anyway ? In other words, how would this differ from exploring ordinary nature in ways that we do already ?
Same comment as above; in addition, you are assuming that this intelligence is sufficiently similar to ours so that we can even describe it as “intelligence”. I see no reason for this.
I see no reason to assume this.
For all you know, you’ve already tried this 100 times, and each time the gods simply restored their last known good backup and fixed whatever security leak allowed you to figure out the Matrix.
Religion is doing just fine already, and there’s nothing stopping you from thinking about grand purposes at this very moment.
I was thinking of a event that seems impossible to reconcile with the laws of physics, regardless of how much knowledge we have.
Imagine I had a gun, and by saying “idkfa” I could make the gun hold infinite bullets. I doubt anybody would say “wow, a new level of physics!”
Honestly I don’t know how to get a number for that. It hasn’t happened in all of human history that we can tell, so it can’t be high.
I’m not a philosopher but isn’t nihilism’s bailiwick that there isn’t an intrinsic purpose or meaning to anything, other than what we give it? Wouldn’t it reverse a century of philosophical thought if we knew we were living in a giant celestial aquarium?
Existentialists believe that we can give subjective meaning to things, nihilists don’t.
Of course you can give things subjective meaning, it just doesn’t magically become objective.
I believe that nihilists would deny that even subjective meaning is a meaningful concept. Existentialists thrive on it.
I think that “idkfa” for me would mean ‘the universe is not explicable in terms that are as simple as I would like them to be’.
I don’t know how I could deduce anything further.
If the simulator is fully deterministic, then all the limitations of the simulator would appear to us as laws of physics.
As programmers sometimes say: “It’s a feature, not a bug”. After all, if it’s programmed, it works how it works. Judging that as wrong behavior requires one to step outside of the logic of the program to judge it. You can only say that there is a bug in an MRI machine that shows tumors wrongly, if you cut open the patient and don’t find the tumor. If you never get to cut open patients, you will never figure out that the bug exists. As there is no relationship between the program and the outside world in a simulator, there is no way for people within the program to ever realize that their laws of nature are just limitations of the simulator.
PS. We tend to dismiss the ‘noise’ that we get in scientific studies, so it’s actually possible that the simulator is glitching all the time and that we just dismiss it, because the glitches are statistically predictable and/or random.
And they answer back “Go right ahead, we’ve made the popcorn and we’re sitting in our comfy chairs”, so then what?
I mean, I once deliberately killed my way through an entire Dwarven stronghold, making sure to wipe out every single one, in a game. Simply because the game required me to steal a particular artefact so I had to kill what guards I couldn’t elude, and this time round when playing I decided “To hell with it, if I’m going to be a cold-blooded killer, I’m going all the way!”
The simulation-runners might feel that way in at least one of the play-throughs. Then when your civilisation has blown itself to kingdom come, they restore from a prior save and play on from that point, and if it looks like you’re going to build a galaxy-gun they adjust the simulation so that it never works, or the guy who thinks of the idea slips on a banana skin and breaks his neck the day before he persuades the world government to do this, or something.
Simulators that repeatedly reload us from the last save file would explain how we’ve seemingly not blown ourselves up yet.
(1) Yes, of course. The sky could open up and someone could come down and say “LOOK, I’M YOUR SIMULATOR. WATCH ME SPAWN ADDITIONAL PYLONS.”
(2) Yes, of course. The problem is that we can’t make it happen ourselves, any more than we can make Yellowstone erupt. It might happen at any time, but we can’t test it directly.
(3) Yes, of course. If I knew I was in a simulation I would not plan very far in the future because it could be shut down at any moment (and probably would be shut down before we began to colonize the universe and consume massive amounts of resources) and so I would be less concerned about x-risk.
(1) Yes, sorry, I phrased that wrong. Is there anything you can do to procure the evidence, as opposed to just waiting for Tassadar to warp in those pylons ? For example, if we suspect that a mass-carrying particle might exist, we don’t wait for Tassadar, we construct additional LHCs.
(2) Right, but then, shouldn’t you be a Christian, a Muslim, a Buddhist, or a Raelian, possibly simultaneously ? After all, evidence for all of these religions (and many others) could appear at any moment…
Says who ? You are implying that you know a lot more things about the simulation than just the fact that it exists. Plus, even if this world is perfectly real, you could still die at any moment; yet, presumably, you still make plans.
Re: #2: I didn’t give you my reasons for taking the simulation hypothesis seriously; they certainly aren’t merely “it’s consistent with our evidence so far” because then as you say I’d have to believe all sorts of things.
Have you read the simulation argument? It gives us good reason to think we are in a simulation, whereas the arguments for the various religions are not as good in my opinion.
Underdetermination of theory by data. There are always infinitely many theories compatible with the data; one needs additional argumentation (and/or prior plausibility judgments) to decide between them.
Re: #3: Yes, I am implying that, though I wouldn’t use the word “know”; that’s too strong. Rather, I’m implying that I have *reason to believe that if I’m in a simulation, the world is more likely to end soon than if I’m not*. My rationale for this is that simulations take resources, and the amount of resources needed to simulate our civilization will scale with the size of the civilization; moreover, most of the motivations I can think of for making simulations in the first place achieve their goals fairly early in the history of a simulation (i.e. before it stabilizes into a singleton and reaches its millionth anniversary of peaceful interstellar expansion). So whoever is simulating us will *probably* be at least somewhat more likely to shut us down later in our history, and anyhow they’ll have at least a non-zero probability of shutting us down, which is enough for my purposes here. (Modulo concerns about them intervening to protect us from other disasters; I’m ignoring that here because it would only prove my point more.)
There seem to be two versions of imagined simulations:
(a) Our descendants are running historical(ish) simulations and we’re part of them.
In which case there’s damn-all we can do, as if we are all currently in an ancestor simulation, the ‘real people’ on whom we are based are all dead, they have performed the actions and deeds which created the history, and we have no choice in what we do as we’re only going to repeat their decisions (e.g. the two World Wars happened because they were historical re-creations and we really couldn’t – in the simulation – have avoided them because they already happened. Think of all the war games that are out there; sure, somebody may run a campaign where the South won the American Civil War or Hitler’s Germany did indeed create a European Empire of the Master Race, but most campaigns are run straight through. And indeed, if we’re in a simulation where someone is running the “what would have happened if the Nazi Party did get off the ground and came to power in Germany?” simulation, we’re even more screwed).
(b) people from higher-level universes are running our universe as a simulation for reasons ranging from sociological research to “what would happen if I pressed this button?”
Okay, so we maybe have a bit more room here, since if we’re being set up as “under these conditions, what would happen?” means we can have some limited freedom of action. But again, we really don’t have any meaningful way of affecting our futures: once the experiment has yielded whatever data the researchers wanted, or when the players get bored of the game, or we all blow ourselves to kingdom come in some future disaster (maybe global warming is the big one here, and our simulators are watching in fascination and taking bets about how we’ll screw this up and when we’ll all die in the cataclysms resulting), that’s the end of that.
As a thought-experiment, it’s interesting. But as reality, it means that nothing means anything. Either our descendants or the higher-level universe could set up conditions any way they liked, from whatever physical laws govern our local area to the way culture and politics are going. You could say “But that’s how the universe already acts” but we don’t (at present) imagine The Universe is going to decide to make gravity work in a different way because the sims are getting too smart and learning too much about the set-up, whereas our descendants/higher-level universe can directly intervene to change parameters if they wish.
For everyone who can’t find it in themselves to believe in an interventionist God (“it’s unreasonable to perform miracles because that is interfering with how the universe works by the established physical laws”), how do descendants/higher-level simulators make any better sense?
The simulation argument has an effect on what is really real, and that has an effect on what is really valuable, since most people don’t like to value fake or virtual things over real ones. And what impacts values, impacts decision making. Although that isn’t evident from the standard less wrong model, where values are arbitrary, fixed and isolated from everything else.
Just because people value X, doesn’t mean they’re right. Most people value Justin Bieber, after all. More specifically, if I was given a steak that looked like a steak; tasted like a steak; sated my hunger like a steak; and in all other aspects functioned just like a real steak — then I wouldn’t care if it was “real” or not. If an alien jumped out from behind a tree and said “ha ha, tricked you, this isn’t a real steak at all, but just a bunch of atoms artificially arranged in a pattern identical to that of a real steak !”, my response would be a shrug.
That doesn’t generalise. Cubic zirconium doesn’t win hearts like diamond. Fake degrees don’t impress, and fighting orcs in a video game doesn’t make you a war hero. If simulationism is true, we are in a video game.
>Cubic zirconium doesn’t win hearts like diamond. Fake degrees don’t impress,
If and only if people know they’re fake.
>fighting orcs in a video game doesn’t make you a war hero.
Which people understand is fake more readily than the first two examples.
So you find out you’re in a simulation, most of what you have been striving for is unreal, your values change, and your decision-making changes. The argument goes through… knowing you are in a simulation would affect your behaviour.
Here’s my problem with this thesis, which is pretty hilarious.
To imagine “ground-level” simulations whose complexity (consciousness, whatever) is “teleported” into them from the simulator’s universe, one has to imagine a conduit of some kind. The conduit can be one-way (from the simulators into the simul…ee’s?) or it can be two-way. If it’s one-way, then it can’t be very dependent on the state of the simulated universe, at all. I would argue that a one-way conduit is not enough for “consciousness”, however we define it. But maybe it is?
A two-way conduit, however, opens up more possibilities for the simulated scientists. They can then interrogate the conduit and try to figure out how and why it responds the way it does. In fact, they may be able to hack the conduit if they’re smart enough to find out interesting computational properties about it. If the simulators aren’t paying attention, the simulated may even be able to hack the two-way conduit enough to fully understand how they are conscious, and begin to make guesses about the functioning of things in the simulator universe. Eventually, they may be able to utilize the conduit to run simulations in the simulator universe by proxy, or something. (Minecraft in Minecraft always gives me a good existential giggle.)
So it seems to me that for ground-level to be truly ground level, the simulator’s only way to guarantee it is either to come up with a one-way type of “consciousness” that does not depend on the state of the simulated universe (which seems, to me, VERY hard) or to not allow complexity to reach the level of consciousness at all.
The question of how one builds a simulated universe with a kind of “conduit” is pretty interesting. As other commentariat have noted, it’s quite a lot like the problem a dualist God has when they are tying the “soul” to the “body”. Hoo hoo ha haa!
“Hacking the conduit” reminds me of our author’s interest in psychedelics.
“So factor this prime number for me, then I’ll believe you really exist outside my universe!”
“Aw man, I’m a sociologist, not a mathematician! I’m not the one who’s going to do the number-crunching for when we write up this study! What do I tell our subject?”
“Just blather something mystical about universal love, okay? Besides, it’ll ruin the experiment if any of them discover we really do exist and they’re all in a simulation – this group are supposed to believe they’re experiencing ‘reality’, remember?”
A ground-level universe wouldn’t have to ban the existence of Turing machines that can be used for a full-scale simulation; it would only have to ban the existence of Turing machines that are being used for a full-scale simulation.
Most likely, inhabitants of the simulated universe would think everything is perfectly normal until they try to simulate a universe, at which point the computer in the outside universe that is running the simulation throws an exception because it can’t handle the load.
Or the technology that works in the lower universe is so slow that the higher level simulator can simulate it easily.
I mean, it’s not like there are limits in our universe that limit human growth….
Except the problem that we are rapidly running into a limit on how much we can shrink computer chips.
And except the issue that the universe is so sparse and big, that we can’t rapidly colonize it, limiting our population growth.
But I’m sure that these are just completely arbitrary limitations, not those designed into our universe by a simulation designer…..OR ARE THEY 😛
As I read it, Carroll’s version of the argument doesn’t have anything to do with a simulated universe being unable to support Turing machines. To clarify:
* It’s easy to construct things in any nontrivial universe that are apparent Turing machines, that could perform any computation if they were hypothetically given enough time and resources. Minecraft, Dwarf Fortress, Conway’s Life, every computer ever built, are examples of this kind of apparent Turing completeness. Since they are run in computers of bounded size, they are not actually Turing machines; they’re just very large finite-state machines that happen to run programs of some finite size.
* It’s (likely that it’s) not even possible to construct actual Turing machines in this universe, because this universe does not actually have unbounded time and resources.
So, a universe capable of simulating a civilization does not have to be Turing complete.
The argument Carroll presents is rather that further down the stack you run out of the “time and resources” necessary to support a civilization (i.e. the parent universe ends before the child universe can execute a civilization). It doesn’t rely on the hierarchy of automata (every universe might as well be an FSM) but rather the pigeonhole principle. If you live in a simulation that can take on N states and you build a machine, the machine can only take on M <= N states.
There’s nothing cheap about high-fidelity simulation. There will never be anything cheap about high-fidelity simulation. The rules of our universe seem terribly ill-suited to that sort of thing – you can spend arbitrarily many atoms trying to perfectly describe a single atom without ever perfectly describing it. There is nothing meaningful behind the idea that ever-increasing computing power means we are ever more likely to be in a simulation. In this universe, computation has ground-level costs that are very, very high.
The idea that there’s another universe whose own rules obviate these issues is just a blind assertion. We can be as sure that it isn’t true as we can be sure of anything.
Yeah but what seems high-fidelity to us might not be high-fidelity on an absolute scale.
Imagine two NPCs in Skyrim talking. “There’s no way a computer could simulate us. We have 64,000 polygons each.”
Like I said – you’re positing an entirely different universe with different constraints than our own. There’s no evidence whatsoever to support this. It’s a total stab in the dark. One might as well propose that a few light years past the edge of the observable universe there’s a wriggling hairy penis taking up the volume of a supercluster. This is the kind of thing with the least imaginable likelihood of turning out to be useful. You could not come up with a more worthless proposition if you tried.
That’s an image that is now seared into my consciousness :-/
I don’t understand this whole discussion. Don’t we know directly that we’re not a ground-level universe? We have weather simulators, so we contain many simulated universes with nothing but low-resolution weather. Perhaps we’ve also created lots of low-resolution Mandelbrot set universes? If a universe needs to contain “consciousness” to “count” then we’ll need to define “consciousness” more carefully before I’ll have any idea what’s being discussed here.
I also don’t understand how one can determine whether one is simulating something or simulating a simulation of it. Those seem like the same thing to me, i.e. “simulate” seems like a monad. Therefore questions about the directed graph of simulations seem ill-posed.
I *also* also don’t understand why there seems to be an assumption that simulated universes will have a concept of “time” and that the simulation itself will run from the past to the future (in some reference frame?), rather than say from left to right. As far as we know, our universe has time-reversible physics (except you have to also reverse electrical charges, blah blah) and causation is just a statistical artifact of the fact that we live in a region of spacetime with an entropy gradient — there seems to be nothing special about “forward in time in our reference frame” from the perspective of the shortest possible complete description of our universe.
Running from past to future is identical to running from nearest a point to furthest away, isn’t it?
Step 2 in the simulation argument as presented in the original link:
Umm… why ?
Joscha Bach wrote something about this.
Ooh, that was nice.
It is not enough simply to count the total number of simulated and non-simulated consciousnesses. Simulation speed matters too. To see this point, suppose that the following is true.
The non-simulated universe is expanding in such a way that at time 0 there is one simulation. At each time 2^(2^k) every existing simulation is split into two distinct simulations by allowing the simulations to diverge, e.g., the result of an apparently random measurement turns out differently in each simulation, and one extra step of each simulation is computed. In every universe, simulated or not, there are 2^t people alive at that universe’s internal time t, and no one ever dies.
In the non-simulated universe at time 2^(2^k) there are exactly 2^(2^(2^k)) non-simulated people. There are 2^k simulated universes, each at internal time k and hence each containing 2^k people, meaning there are 2^(2k) simulated people. Thus the fraction of simulated people at time 2^(2^k) is 2^(2k)/(2^(2k) + 2^(2^(2^k))) < 1/3 for every k.
However, there will be continuum many (i.e. 2^ℵ₀) simulated consciousnesses over the total lifetime of the universe (there is one such consciousness for each branch of the binary tree) but only countably many non-simulated consciousnesses. Nevertheless, in such a situation it seems the correct inference is that one is more likely than not to be non-simulated.
After all, given that one has an experience, one knows it happens at some finite time in the non-simulated universe, and whatever that time is there are at least three times as many non-simulated consciousnesses as simulated consciousnesses.
This is not offered as an analysis of how we should actually estimate, merely as a proof that the assumption that one can simply count the total numbers of simulated and non-simulated consciousnesses is incorrect.
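The counting in this comment can be checked mechanically for small k; a quick sketch that follows the setup directly (one simulation at time 0, k doublings by outer time 2^(2^k), population 2^t at internal time t):

```python
from fractions import Fraction

def simulated_fraction(k):
    """Fraction of people living in simulations at outer time T = 2^(2^k).

    Setup from the comment above: one simulation exists at time 0; at each
    outer time 2^(2^j) every simulation splits in two and advances one
    internal step.  A universe at internal time t holds 2^t people.
    """
    T = 2 ** (2 ** k)
    num_sims = 2 ** k                # k doublings by outer time T
    sim_people = num_sims * 2 ** k   # each simulation is at internal time k
    nonsim_people = 2 ** T           # the outer universe is at time T
    return Fraction(sim_people, sim_people + nonsim_people)

for k in range(1, 4):
    print(k, float(simulated_fraction(k)))
```

The fraction plummets doubly-exponentially, even though the number of simulated branches over the universe's whole lifetime is uncountable, which is the comment's point.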
“Suppose the universe is a simulation.” “Okay. What would that imply?” “I dunno.”
So what would we do differently if we were in a simulation, than if we weren’t? Would we try to contact the simulators and get them to give us a better deal? Would we make our own simulations and try to enter them to get a better deal?
At the very least, I would like to know why we were created.
Why would that matter? If it turns out we were a simulation for some kid’s science fair project, or a simulation to test X theory of sociology, what’d that mean to our lives?
In Christianity, God’s reason for creating us matters because (among other things) He’s the source of all being (rather than just our local part of the multiverse) and He’s good (rather than simply owning a Make-Your-Universe program). I suppose that if the simulator threatened to plop our simulated consciousnesses into torment or bribed us with a good afterlife, that might motivate us… but what’d just knowing get us, except answering our curiosity?
If it turns out we were a simulation for some kid’s science fair project, or a simulation to test X theory of sociology, what’d that mean to our lives?
I suppose it would matter because then we could ask the hard questions of theodicy: why the existence of evil? does free will actually exist?
So you could have simulated a universe where nobody is a criminal but you didn’t – why? It makes a difference if we’re in the position of lab rats – the sociologists have problems in their own universe that can crudely be modeled by how our societies break down under stress and they are observing to see if the data supports their theories about causes and solutions for their own problems. (years back I read an early Star Trek novel along those lines; a more advanced civilisation is found to be causing societal breakdown and adding stresses to cultures within the Federation and the reason turns out to be that they are facing an X-risk for their society if they don’t manage to solve that problem, and are using the less advanced Federation as testing grounds for solutions).
If we are told “yeah, but all the pain and suffering doesn’t really matter, it’s not really real, you’re just characters in a computer game” for the kid with the Build Your Own Universe kit, what would we think? Okay, so death really is the end and nothing matters and we are only good or bad because we’re written that way – no free will and no meaningful choice?
If it’s “we just wanted to see what would happen, we didn’t interfere, we let you evolve in your own way”, then do we hold our creators to account for that? Would we still hold to Henley’s Invictus
when in actuality we are not the masters of our fate and if the designers wish, they could change our characters and personalities so that instead of being “bloody but unbowed” we did cower and cringe and bewail our fates?
There was a strain of 19th century freethought which rather rejoiced in the notion of holding God to account if ever such a being existed; okay, so we do find our creators and can demand answers of them – what does that mean in reality, rather than in “I would prefer to go to Hell than worship such a God” rhetoric, where we have no feasible way of affecting the beings who can simply pull the plug on the whole simulation?
Would they regard us as the character in that story we read on here a while back – the gamer whose character found himself talking to another character in the game as if she really were conscious, trying to explain to her that no, she’s not real but he is, and it turns out to be a prank by his girlfriend? If we complained to our simulators that we really suffered and were hurt in this simulated world, would they switch off the simulation and turn to the others and go “Great joke, guys! You really had me going there for a second, it was almost as if they really were conscious!”
If I’m in a simulation created by a 12 year old for a science fair, I think we can throw the “all-good” part of God out of the simulated window.
Theodicy becomes nothing.
Theodicy becomes nothing.
It does provide one solution to the problem of evil, viz. that it’s not a problem, it’s just that the kid was playing with simulated characters and nobody ever really suffered in any meaningful way. We’re the equivalent of raising tadpoles from frogspawn or growing crystals out of solution, and any wonkiness along the way is just random.
It may not be a great solution to the people involved (“look, millions of people on our planet starved to death in various famines and dying of hunger is not fun!”) but we don’t worry about the ‘pain’ suffered by computer game characters dying (in fact, in some games, we like as many gibs flying about the place as we can get) so our pain and suffering is meaningless and practically non-existent as it did not happen to anyone inhabiting the ‘real’ universe of the 12 year old and the science fair.
It’s not that no one was actually hurt, it’s that God isn’t “triple-omni”. If God is the equivalent of a kid with a magnifying glass setting ants on fire we don’t have to wonder how it’s possible that evil exists. Clearly that 12 year old isn’t all-good and so can easily be the source of evil (they also probably aren’t all-knowing or all-powerful, depending on how they have coded the simulation and some other factors).
If God isn’t “triple-omni” there are fairly easy answers to why evil can exist. Theodicy isn’t a problem for ancient Norse, Greeks and Romans, nor modern day, say, Hindus. It’s a problem for people that claim God is all-good, all-powerful, and all-knowing.
I don’t think I am particularly providing any information to you, as I think you know all this. I’m just trying to make sure my position is clear.
Just a note: for a Catholic like Deiseach or me (dunno about Evan), God is not the same as a simulator and the simulator’s existence still needs to be explained etc. Having embedded universes does raise interesting problems for many theodicies, like massively complicating the question of the evidential problem of evil I suppose, but it’s my understanding that it does not really threaten whether God is good or not.
If God is the equivalent of a kid with a magnifying glass setting ants on fire we don’t have to wonder how it’s possible that evil exists.
HeelBearCub – yes, if the “god” is a 12 year old playing with a Build Your Own Universe kit.
If the simulators are adults, and presumably much more intelligent than adult humans (as required for their higher-level universe to be able to run the simulation of our universe – which means their 12 year olds may be massively more intelligent than even our most intelligent humans, so that only kicks the problem of evil down the road a bit) – then they have to be asked “Why did you create us in a universe of suffering?”
If the answer is “your suffering doesn’t count because it’s not real because you’re not real persons, you’re only simulations”, that’s an answer. Then we would have to accept that we’re not really conscious and our experiences are meaningless.
Would we then continue to argue it out with the simulators? “Why did you sit back and watch as Dictator Z sent a million people to their deaths? Was that just an experiment for you?”
Could we argue that our experiences were indeed meaningful? How could we do so? Would we accept “Okay, when my child died, my grief wasn’t really deserving of sympathy because neither my child nor I are more than artificial entities in a fake model universe”, or would we say that even if that is so, our creators owed us the same duty of care as parents or even the keepers of pets?
Seeing as how we can argue over “do chickens have moral worth” and “is it cruel to keep an artificial intelligence locked up in a box”, these are the kinds of questions we might indeed pose.
“Because it’s the way the universe works” is not considered a satisfactory answer by many, so long as that universe can be held to have been made by a conscious intelligence. And even Hinduism does have a certain “Why do bad people and evil deeds prosper, why is this necessary?” level of query, because the Tridev are the ultimate powers of the universe beyond the gods, and (depending on whether you think Shiva or Vishnu, or even Shakti, is the Ultimate Cosmic Creative Principle) they can be questioned as to why this is, and depending on how satisfied or not you are with the answer that “This is how it is”, you may or may not think there is a theodicy problem.
Anyway, my main point is that if the simulators are merely beings of a higher-level universe, they can be held more accountable as they don’t get away with the Mystic It Is A Mystery answer; they presumably have had some analogue of suffering and pain in their pasts before they became the Glorious Utopian Post-Scarcity Empire that can simulate baby universes with conscious beings in them, so they know the problems. Then why didn’t they make us all nice, good, kind people who didn’t hurt one another but peacefully shared resources and advanced our scientific knowledge? What is the necessity there for us to be cruel to each other, since they could create any kind of conditions they liked in the simulation?
You would have to imagine that they were enjoying the show, and since they’re not gods as such, they get held to the same accountability as people running dog-fighting rings.
Again, I would never posit that any of the creators of a simulated universe, adult or otherwise, are all-good, and in fact I wouldn’t actually posit them as all-powerful or all-knowing. If a simulated being were to ask the question “why did you create us this way” the answer might be any number of things, but essentially would boil down to “because I wanted to see what happened”.
The consciousnesses should regard their creators in much the way the people in the Matrix regard the creators of the Matrix. Certainly not to be worshiped, unless such worship actually brought transactional benefits in the way that burnt offerings were thought to.
In Bostrom’s theory, even this wouldn’t actually be possible, as the creators are definitely not capable of knowing and evaluating everything that occurs (so definitely not all-knowing). If a simulated individual life lasts for a few seconds, from birth to death, I am simply not capable of being aware of their entreaties. I might build some transactional response into the simulation if I chose to.
So definitely not all-knowing. And not all-powerful in the way interventionist God Christians think of God. More the clockwork God of Einstein, if a God at all.
I wouldn’t likely be incredibly smarter than the entities within the simulation either, as the whole point would be to create ancestor simulations equal to our own intelligence.
I don’t think it is actually possible. But I don’t think it argues for thinking of a creator as a being to be worshiped.
Actually, a God that creates a top-level universe in which it is possible to nest arbitrarily many simulated universes in which sentient beings can be made to experience tremendous pain and loss and commit genocides against each other seems even worse to me than a God who just creates our universe alone. This seems like even more of a problem for theodicy. We may not even have free will and there is no ultimate purpose to having us experience all of this shittiness if it’s just for the amusement of random 12 year-olds in the top-level universe who quite possibly don’t even realize their simulations are sentient.
If a simulated being were to ask the question “why did you create us this way” the answer might be any number of things, but essentially would boil down to “because I wanted to see what happened”.
HeelBearCub – but that brings them nearer to us, and renders them more accountable.
Just as we would not find it acceptable if a parent was accused of neglecting their children, and the answer they made was “I did it to see what would happen”, even if they dressed it up as “I was running an experiment to study feral children”, so we would not find it acceptable if our simulators said “Just to see what would happen”.
Absolutely such beings are not to be worshipped! I’m certainly not saying they are as gods to us, because they’re not, they’re material beings of the same nature we (or the real world analogues of which we are the simulations) are. But if we are indeed sentient (and not just NPCs programmed to act as if we are sentient running on very advanced scripts), as Adam posits, then they are even more responsible for the suffering and misery we endure, avoidable suffering and misery. They could set up the parameters of the simulation to avoid disease or natural disasters or over-population; they could have designed us to have better traits (this is, after all, an objection often levied when talking about a Creator God – “I could have designed a better universe myself without the need for evil”).
So why didn’t they? We are entitled to ask them that, if they are merely Sufficiently Advanced Aliens or even more so if they are our own descendants. We hold those who own animals to a standard of not being cruel, even in the cases of animals intended for slaughter for food. Nobody says “Well, these pigs will be killed anyway, may as well spare the expense of treating them even as well as the minimum amount we already do”.
The only excuse the simulators will have is to come back and tell us that they didn’t know we were sentient or that we are not, in fact, sentient. Or that we’re the equivalent of watching a horror movie and the purpose of letting the simulation run wars, massacres and torture is the cathartic enjoyment of watching a horror movie with a serial killer who ingeniously tortures their victims.
We argue that animals which may not even be sapient but are sentient enough to feel pain should not be exposed to causes of pain; what are our simulators going to answer to that charge – that we are sentient enough to feel pain and fear and other strong negative emotions, so why did they impose those on us?
This is the flipside of the “creating an AI” argument – if it is possible for us to create an AI, can we create a true intelligence that is genuinely conscious? If we do, what duties do we have to it? Can we treat it as our servant/slave, or do we have to accord it the same rights as a sentient entity that humans enjoy?
The original question was “what import is it if we are running in a simulation?”
If we, us, you and I and everyone else on planet earth, are running in a simulation, it’s fairly clear that we can’t actually communicate with our creators to ask them anything. Nor has it become apparent that we are running in a simulation, so we can’t even try to deduce the answers to questions we might pose to them, and we can’t actively oppose their wishes as we might if we did.
So, if we are running in a simulation, I don’t find it particularly useful information. Better to care more about a single ant in the Andes than whomever may have created us.
I think an entity capable of creating new universe simulations would acknowledge the possibility that the beings in it were sentient, and almost certainly know whether or not said beings were. Even if they’re the equivalent of their society’s pre-teens. They may not care – who knows, maybe they’re at an intelligence level that’s so advanced and different from us that mere sentience and consciousness is not enough to make them give a damn. Humans, whether 12 or 52, understand that mosquitos are more intelligent than rocks, but the distinction at those levels rarely matters beyond academic interest in primitive nervous systems and attempts to predict behavior. To a very highly advanced being, we might be like simulated bugs; we’re at an intelligence level above inanimate objects, but outside of that fact making our behavior slightly more complicated to predict and model, they simply don’t care and we’re so far beneath them that our so-called intelligence is of a different and far more primitive type.
Similarly, even as a simulation we’re (possibly) capable of affecting the simulators – the problem is that it’s very hard and we have no way of knowing how to do it. Can humans be affected by what happens in our own simulations? Sure. Can the things we’re simulating figure this out and do it deliberately? Nope. Unless sentience is a hard minimum level of intelligence for making detailed mental models of other beings (this seems at least somewhat plausible) AND it’s possible at least in theory for sapient/sentient beings to figure out how to affect any other, even if they’re at a far more advanced level (seems a lot less likely), any effect we manage to have on our creators would be by chance.
Robin Hanson: How To Live In A Simulation
The thing that gets me about this argument is the idea of simulating a “ground universe” that _doesn’t_ have the ability to simulate Turing complete computers. Presumably the universe one step above the “ground universe” _can_ run Turing complete computers. Given this, I don’t know how it _couldn’t_ simulate a Turing complete computer – any Turing complete system can run any other Turing complete system.
I think that the idea of a ‘ground’ universe is nonsense. Am I missing something?
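The claim that Turing completeness is cheap to reach can be made concrete with a toy interpreter. This is a minimal sketch – the `flip` machine is a made-up example – of just how little machinery a “ground” universe would somehow have to forbid:

```python
# Minimal Turing-machine interpreter. The "flip" machine below is invented
# for illustration; the point is that these few lines are already enough
# machinery to simulate any other Turing-complete system, given enough tape.
def run(rules, tape, state="start", head=0, halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank "_"
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape) if tape[i] != "_"]

# Example machine: invert every bit, halting on the first blank cell.
flip = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, [1, 0, 1, 1]))  # → [0, 1, 0, 0]
```

Any universe whose physics can support the state table and the tape update supports, in principle, arbitrarily nested simulation – which is why banning Turing machines while permitting brains is so strange.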
It could, but it doesn’t.
I should point out here that you’re using really nonstandard terminology — by which I mean, about exactly backwards. Ordinarily, the “ground-level reality” — there’s only one! — is, y’know, the actual reality that is not simulated. It’s the ground level, it’s not built on top of anything else, it’s not supported by anything else. If you run a simulation, you are building it on top of your own world; your world acts as substrate for it.
Alphaceph has already pointed this out, but, uh, this seems worth repeating.
Indeed, it’s not immediately obvious to me whether simulations need to ultimately terminate at what you’re calling a “ground-level” universe, since you’re defining it as one that cannot simulate others, rather than merely one that does not. And I don’t think the simulation argument does rely on this distinction; I’ve never heard any version that does, certainly.
(Have you considered just reversing any spatial metaphors you use before posting? 😛 )
(Edit: …OK, I just read the link and see that Carroll reverses this too, so I guess that’s where you got that from. Also, Carroll’s argument does depend on this distinction, so it’s not coming out of nowhere. Still, I think it’s a mistake to attribute this distinction to the simulation argument per se, rather than Carroll’s post. And to use such confusing terminology…)
(Also edit: Carroll also seems to have other things wrong about the simulation argument. For instance, the simulation argument itself doesn’t argue that we live in a simulation, but for a disjunction of that and two other things.)
There’s a Beatles song We All Live in a Yellow Simulation.
Is this assuming that we have to be able to understand consciousness to produce it? That certainly seems false (see: reproduction). If you’re saying that consciousness must be possible within this universe’s laws for us to simulate it, that still seems false. I could make a simulation with all sorts of counterfactual physical rules. Mightn’t one of those open the door to consciousness, whatever it may be?
The Singularity promises to make you immortal. The Singularity believers should adopt the Spaghetti Monster as their God.
No, they’ve already got Future AI God, or (in some denominations) Present AI God In Simulators’ Universe.
My complaints with the original argument: I’m not convinced that we have strong evidence that we aren’t in a ground-level simulation, or close-to-ground-level enough that it doesn’t matter.
I think we are at best maybe 10-100 steps from ground level, depending on how much of the universe we allow devoted to simulation, which is an important question. Our universe (let’s call it U_n) has some finite amount of computation C_n available to it (due to entropy), and one of the base assumptions (which I think is correct!) is that C_n is not sufficient to simulate U_n (at real speed), or C_n < Sim(U_n), where Sim(X) is the amount of computation required to simulate X.
I want to take it further, though, and claim that the loss is not a constant dropoff, but proportional at each level (actually, I think it might be worse than that, but I think there's strong evidence that it is at least proportional and that's all I need for this argument). Thus, we have that C_n/Sim(U_n) = R, where R is some number less than 1 and is a rough measure of simulation-entropy, or the fraction of computing power that survives emulation. There are a lot of cute ways we could claim to derive what R is (the ratio of console hardware flops to the amount of flops required to emulate that console on a PC would be a fun one – a lazy eyeball using Moore's Law and the N64 gives ~6 years, so 1/8), but they'd all be arbitrary and bullshit, so I'm just going to use R = .1 because (a) .1 is a good baseline for food chain loss of efficiency, (b) simulating a universe is hard – there are a lot of moving pieces – and (c) honestly it doesn't matter too much as long as it's not like .9999, and there's no way reality is that efficient.
So, if we gathered all the mass and energy in U_n we could simulate a universe – let's call it U_(n+1) – that requires 10% of the computing power ours does. This universe could still support conscious life and observers: we're not using 10% of the universe right now. They could do the same thing we did, and so on: the ratio of the mass of the solar system to the mass of the universe is about 6 * 10^-25 (according to wolfram alpha), so this cycle could probably go on just fine for a long time. When we get to U_(n+26), C_(n+26) = C_n * 10^-26, so we can no longer simulate the entire solar system. There are corners we could cut, and a lot of them: run the simulation slower, fake a lot of the relevant details, etc etc. These things could save us a lot of power. But by U_(n+56), you only have enough power to simulate (at the fidelity of our universe, in real-time) 168 grams, or the weight of a Samsung Galaxy Note 3. The Note 3 is maybe not the perfect computation device, but I don't think we could make a computer at that weight capable of simulating an observer, so that will be our cutoff point. So without any tricks and assuming all our other assumptions hold, we are at best about 50 levels of simulation from the bottom.
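The level-counting above can be sketched numerically. Every constant here is the comment's assumption (R = 0.1, a solar-system mass fraction of 6×10^-25, a 10^-56 cutoff for the 168-gram "Note 3 limit"), not a measured value; the log-based count gives 25 rather than 26 levels for the solar system only because the comment rounds to whole powers of ten:

```python
import math

R = 0.1             # assumed fraction of computing power surviving each level
SOLAR_FRAC = 6e-25  # assumed mass(solar system) / mass(universe)
NOTE3_FRAC = 1e-56  # assumed fraction at which only ~168 g is still simulable

def levels_until(frac_limit, loss_ratio=R):
    """Nesting depth at which available computation drops below frac_limit."""
    # Tiny epsilon guards against float rounding when the ratio is a whole number.
    return math.ceil(math.log(frac_limit) / math.log(loss_ratio) - 1e-9)

print(levels_until(SOLAR_FRAC))  # 25 - about where a full solar system becomes unaffordable
print(levels_until(NOTE3_FRAC))  # 56 - the "Samsung Galaxy Note 3 Hard Limit"
```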
Maybe with tricks we can add more, but each trick that gives us a 10x simulation speedup is only going to give us 1 more layer, and there are only finitely many tricks. Even slowing down the rate at which the simulation runs can only help us so much. If time in the simulation only passed at 10^(-56) the rate it does here, U_(n+56) could be as big as ours, but it wouldn't even be able to run for a year before every proton and neutron in U_(n) decayed into photons (which will happen at 10^40 years according to wikipedia), at which point we have probably spent all (or at least almost all) of C_n. Still, let's be super optimistic and say that using all sorts of incredible tricks, we manage to improve efficiency by 10^100 via some combination of slowdown, only simulating parts that matter, faking history, etc etc. Then the universe-stack below us could go all the way down to U_(n+156) before we hit my Samsung Galaxy Note 3 Hard Limit Of Consciousness. Nice! That actually puts us pretty high up if we ignore that there are an arbitrarily large number of things above us.
But there are a lot of important caveats here. Firstly, we need to turn the universe into a computer. This requires a lot of work and also is not an allocation of resources that most people would support. I think that a universe U_x where all of C_x is being used for simulation could probably exist (Singularity-AI told to simulate things, turns universe into computronium to do so), but I think that they would be extremely rare. If the simulation runner in U_(x-1) is not a simulation-running singularity AI, then the simulation runner will notice that layered emulation is a very stupid use of all these resources they have gathered, and would at some point opt to terminate the simulation of U_x (but perhaps not before saving the U_(x+2) being simulated, maybe running it directly). This saves us unless U_0 is being run by such an AI, at which point there is no higher power willing to terminate the simulation, and we might imagine a chain of universes in which civilizations form, bloom, and then make the mistake of irreversibly transforming their entire reality into a computer designed to simulate a shittier one instead. (This could make a fascinating sci-fi setting, maybe.)
But to get that kind of thing happening, every singularity-AI would need to be given a very specific command. It’s more likely that they’d accidentally turn a universe into paperclips than computronium (according to our shared lore): “Simulate a universe as best you can” might not be the rarest possible seed-command, but it certainly isn’t literally the only one that could be given, and if that happens once, the entire stack is dead; all of C_x is wasted and there can never be a C_(x+1). If we appeal to random-observerness (which is how we got here in the first place), we’re going to find that it would be extremely unlikely for us to be part of such a stack at all: multiverses where a singularity-apocalypse took place at U_0 and every universe below it will, at any given time, have very few observers. A single U_0 singularity-apocalypse might happen, but at some point (and probably not very deep in the chain), there have to be people or people-ish things running the show. And at that point you are going to run into trouble on the order of “most people would prefer to exist than to be a computer”, and “most people would rather have really sweet toys and live forever than build a better computer with those same resources”, or even “most people would prefer using the computronium core of the universe to simulate themselves / each other than to perform a massive high-fidelity simulation of another universe.”
This is the second problem: to get that R=.1, you have to spend all of your C in one place. So one of the assumptions that leads us to believe that we should be in a ground or near-ground universe requires that people run a lot of simulations, instead of just one: otherwise it’s not hard to believe that we live in U_n, the lower limit is U_(n+156) (The Samsung Galaxy Note 3 Hard Limit Of Consciousness), and maybe there are another 2-400 universes above us, but not so many that we wouldn’t expect to be where we are as a random observer. That said, I don’t think we should look at it like this, because I don’t think the norm would be that Universes tend to spend all of their C in one place.
If we assume instead that people are going to want to run many simulations, things get more interesting. We go back to the “we should assume we are ground or near-ground” assumptions, but “near-ground” starts to look a lot different. Let’s say that every civilization decides that they want to virtualize themselves to save energy and live forever + all the other benefits, so they computroniumize the universe. Let’s say they then set aside 1% of their power for people to run simulations on, and 99% for the rest of them to live in glorious virtualtopia. (Or maybe the 1% is people in the virtualtopia deciding to spend their power on a simulation of another universe rather than Unlimited Hedon Works). Some civilizations might just run 1 simulation, but a far more likely scenario is people running many, as various kinds of experiments. Let’s be boring and say that at any given time, 1000 of these simulations are running, though in practice that could be much, much higher. Now the effective value of R has gone from our base .1 to 10^-6. The number of possible universes below us (once we computroniumize U_n) drops from 156 to 26. There are a lot more of them (1000^26 is a lot of universes!) but the fidelity drop-off happens super quickly: by the time we get to U_(n+2), we can’t even simulate the entire Milky Way (at our fidelity), and the U_(n+4)s have to start cutting corners on just a solar system.
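The effective-R arithmetic in that scenario checks out as a few lines (again, every constant is the comment's assumption, not a measured value):

```python
# All constants are the comment's assumptions, not measured values.
R_base = 0.1                      # per-level emulation efficiency
sim_budget = 0.01                 # 1% of a universe's power reserved for simulations
n_sims = 1000                     # simultaneous simulations sharing that budget

R_eff = R_base * sim_budget / n_sims   # 1e-6: six orders of magnitude lost per level
orders_available = 156                 # 56 base orders + 100 orders of "tricks"
max_depth = orders_available // 6      # 26 levels below us, matching the comment

print(R_eff, max_depth)
```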
And maybe if you pick a random observer, it’s rather unlikely that we are where we are. But we’re not actually that high on the chain, and if not everyone is running simulations with their resources (or even getting as far as a universe computer, which we can’t even do anymore because only 3% of the observable universe would be reachable if we started gathering now at the speed of light), then the odds increase. The odds aren’t low enough that we can reject the premises outright. And finally, we do see signs of being near the ground: we have a lot of computational power and have put a lot of work into it, but have failed to make something we would consider conscious. At 5+ levels above us, the entire Milky Way could be simulated on a personal device, a device which could only be described as a (Samsung) Galaxy (Note 3).
I suspect that if you are in a simulation, you won’t necessarily be able to tell if it’s “ground-level” or not.
For example, let’s assume that for economic reasons, a simulation is likely to be very small and focused on some person or event of interest. Let’s say my child’s life is what is of interest to the simulators. Then I and my wife and a few other people are simulated in great detail, everyone else my child encounters is given some crude NPC backstory (fleshed out on the fly if my child starts to get to know the person) and places we will never visit, like say central Europe, exist only as headlines in our news feeds. The city in which we live ceases to be simulated when we go into our apartment, and reappears in some statistically modified way when we go back outside. When we interact with a solid object it’s not modelled at the atomic level but at something like the micron level or higher, because a human couldn’t tell the difference. etc. etc.
This is obviously a “ground-level” simulation but it does not look like one from the inside.
Yep. This general form of argument seems like a very good argument for solipsism to me. There should be far more simulations of single persons with sufficiently fleshed out surroundings to convince them that they inhabit a complete universe than simulations of actual complete universes, so I’m way more likely to be in the former.
A meta point: If anyone doubts that it’s possible for an amateur to make meaningful progress on important intellectual questions, note that the entire discussion was sparked by a junior conference attendee, who found an important issue with an argument made by Nick Freaking Bostrom, and his idea was covered by Sean Carroll and SlateStarCodex.
What makes you convinced we’re making meaningful progress here?
On a related note, are there any halfway convincing arguments that say one shouldn’t believe they are conscious? (Yes, I know that by saying “believe” I am implying they are conscious, but it’s difficult to state the idea without doing so.)
There is no you, rather there is a brain that has lots and lots of subsystems. You would not say that the two of us (Wrong Species and Miller) form a conscious unit, similarly you shouldn’t think that “your” brain’s systems come together to form a conscious unit. See https://en.wikipedia.org/wiki/Split-brain
Maybe there are subsystems, but those subsystems have some kind of conscious property. Regardless of where consciousness comes from, I don’t see how this possibility denies its existence.
>There is no you
Who – or “what” – am I speaking to, exactly, if what’s true for one is true for another, and to who or what does “James D. Miller” reply?
Please understand: as a layman, these claims sound absurd since they necessarily violate my everyday experience of experiencing consciousness.
I don’t know if you guys read his linked article, but what it’s saying is that studies of split brains seem to indicate that the human brain can simultaneously house and express many independent observer views. For instance, they showed cards of different colors to each eye of a person with a split brain, asked them what they saw, and they would verbally report one color while writing down another, indicating that the parts of the brain responsible for speech and for writing can each act as if they believe they are “James D. Miller” while nonetheless holding mutually exclusive sets of beliefs.
The point isn’t really that consciousness doesn’t exist. It’s that the self doesn’t exist. It’s a contingent and temporary consensus of independent subsystems, each of which may become more or less dominant at different times. At least, that’s one theory. It’s at least possible that a person without a split brain is always effectively acting as one unified whole, but if you split the brain, there is sufficient redundancy that each part can achieve on its own what the whole used to achieve.
However, the “no self” hypothesis also pretty elegantly explains many other things, such as apparent internal conflicts, behavior that contradicts stated beliefs, behaviors and beliefs that are inconsistent with respect to time. Sort of a brain as legislature effect. Congress may ultimately speak with one voice, but in reality it’s a whole bunch of conflicting agendas, beliefs, and preferences that are constantly vying for control with temporary alliances forming and winning for short periods of time and then getting overthrown by different alliances. But from the point of view of someone who just has to follow laws, the legislature can be treated as a single body.
Similarly, you can address “James D. Miller” and whichever contingent alliance of independent brain components is currently and temporarily in charge of the communication center will always answer to that name.
A lot of this has been part of Dan Dennett’s view for a long time.
Yeah, I believe he called it “multiple drafts” theory, right? The idea that there are always many drafts of your own narrative experience present in your brain, and which gets presented by various IO components is at least somewhat arbitrary, and this also explains why people’s memories are often inconsistent with their own earlier statements. The draft presented to the memory is different from the draft presented to whoever you were talking to earlier.
I find a better description to be “there is a you, but it may not be exactly as you thought it was.” This applies to Dennett’s multiple drafts model too. The trick is not to inflate the self into a Self (capital S).
If someone claims to be able to see red, he can’t describe to me what red is like, so I can’t know that I’m perceiving the same thing as him. But we can both look at object A, say “that is what red means”, and then independently come to the conclusion that object B must also be red and object C is not. Comparing our conclusions shows that ‘red’ is a real thing, since we can use it to make predictions about what the other person sees as red.
This doesn’t work for consciousness. He and I can only observe one mind each, and it isn’t the same one. So when he says “I am conscious” I have no way to know if I am too–he can’t communicate what it means to perceive consciousness, *and* the type of verification that I can do for “red” is impossible, since we can’t observe any of the same minds in the way that we can observe the same red objects. This makes me unable to figure out if, when he says “I am conscious”, I have the same thing that he is describing.
This just seems like a restatement of The Problem of Other Minds. Sure, I don’t know that I am an experiencing consciousness in the same way you are, but I still know that I am capable of thought simply because I’m thinking about it.
How do I know that I am thinking? You can’t give me a description of thinking that I can look at and say “sure enough that’s what I got”. You can’t give me any examples of thinking that I can compare it to either. It’s the same problem except for thinking instead of consciousness.
“You can’t give me a description of thinking that I can look at and say “sure enough that’s what I got”.”
We may not be able to know exactly what other people are experiencing, but surely this is false. Just because you don’t know exactly what my experience with thinking is doesn’t mean you don’t understand the concept. We all know the concept because it comes naturally to us. I still don’t understand why you think this is a problem. I know that I have consciousness. I don’t need to compare experiences with others to prove it. And even if everyone had a completely different sense of consciousness than I do, it still wouldn’t disprove my own ability to be conscious.
Referring to “the concept” is begging the question, because it implies that there is a single concept and that both of us are talking about this concept.
It would certainly disprove the idea that you have the same thing that someone else means by “conscious”.
“It would certainly disprove the idea that you have the same thing that someone else means by “conscious”.”
That wasn’t my question though. It’s a completely different one. My question is “This is what I understand as consciousness. I believe that I have it. Is there any reason to doubt that?”. I’m not interested in how other people define consciousness and for the purposes of this question, I don’t care what they experience.
Is there any halfway convincing argument that says one shouldn’t believe they are conscious?
The Boltzmann Brain problem is relevant (see the Wikipedia article, or Sean Carroll’s discussion of it).
As far as I know no one actually believes that they are a Boltzmann brain, but it’s an interesting idea to wrestle with.
I think the notion of “consciousness” is a confused mixture of several different concepts and implicit assumptions, and prefer not to use it when discussing the associated philosophical issues.
Let me make an analogy with the idea of “God”: the notion of God is typically associated with a set of concrete properties, such as performing miracles, listening to prayers, judging us in the afterlife, and being all-powerful. When atheists criticize the notion of God, they typically complain that these concrete properties are implausible. A common apologetic reply is that the atheist is only addressing a naive conception of God, and actually some of those properties aren’t true or only apply in a much more nuanced way. My response to this reply is that while the apologist claims to have a more sophisticated view of God, it’s unclear what this view actually is; the specific explanations tend to be confused, sometimes in ways where it’s very difficult to argue they’re confused, or describe something lacking so many of the concrete properties of God that I don’t see why the object they describe deserves the name.
Consciousness, too, is associated with a loose collection of concrete properties as well as a set of confused philosophical ideas combining these properties. However, these concrete properties are of a different nature: being awake, being human or at least an animal, thinking things, feeling things, seeing things, being able to control a body. While the properties associated with God seem unreal, the properties associated with consciousness all seem real. So if forced to give a yes-no answer, I would rather say I disbelieve in God and believe that I am conscious, but a better answer is that I can’t interpret the question in a clear way.
At this point, it’s an annoying debate to me.
This idea strikes at the heart of it.
What’s the real difference between this place and a simulation above this? Both levels must be bags of math moving around.
At this level, we have no idea what a simulation even really means at the levels we are talking about.
There’s no “must” about it. The fact that the universe is describable mathematically doesn’t mean it is ontologically mathematical. That would be a classic map/territory confusion.
It tautologically must be mathematical, merely by existing, or even not existing. Mathematics is fundamental. Somehow, we arise from the infinite set of axioms.
Though when you combine the current world with the permutations possible in the nervous system, it’s hard to say what does not exist in a meaningful sense of existing.
Or, in a sense, pink elephants dancing on the skin have probably existed in apparent realness to someone.
No, not really
Sometimes I wonder if the people who posit that we live in a simulation have ever run one.
It seems to me that the fundamental argument (one universe can give rise to many simulated ones) is true only if we don’t take into account the level of detail or the scope of the simulated universes. Every level of simulation brings with it a certain overhead, which can be stupendous, based on the laws of physics in the simulation and the scale of the universe.
We happen to live in a universe where the laws of physics are tremendously hard to simulate, at least for us – partial differential equations in a curved space-time continuum, holy shit, we have to cut corners even if we’re dealing with a handful of elementary particles, but there doesn’t seem to be any corner-cutting going on in the results of our experiments.
Also, we have to think about motives. You usually run simulations to observe and test specific things, and you set up the rules and scope to isolate these things. If I were a scientist in a top-level universe, I would probably hesitate to turn in a grant proposal that runs like “we propose to turn a large fraction of our universe into a supercomputer for a simulation. We plan on setting up a universe with 10^80 particles with ridiculously complex interaction rules, run it without supervision for 10^10 simulation years, and see if anything interesting eventually happens in some remote corner of that universe.”
You assume that 10^80 and 10^10 are large numbers in some “absolute” sense. Actually, to the simulators they may be so trivial that our universe is not a serious experiment at all, it’s a free app.
They are definitely large numbers in relation to the number of interesting things that happen (at least from our human vantage point). There are gazillions of stars that pretty much all do the same thing (fuse hydrogen, until they don’t anymore), maybe with a bunch of lifeless gas blobs or lumps of rock for planets. Why bother to simulate them all?
Now you say, ha, but for our omnipotent simulators, it’s not a big deal at all. I don’t think that’s a valid leap. The whole simulation argument starts from the experience in our level of reality, which is that setting up simulations is possible. But extrapolating from that while dropping a second aspect of our experience (setting up meaningful simulations takes resources, planning, and hard work) and a third aspect (simulations that we set up have a purpose and a focus, and restrict themselves to the scale of interest, whereas our universe doesn’t look purposefully designed and is mindbogglingly vast, detailed and multi-scaled) ends up in the realm of bullshit IMO.
You cannot say there are “gazillions of stars” without ruling out the simulation hypothesis. They are the easiest bit to fake. Secondly, the only assumption we need to make in order to set up the simulation argument is that simulation is possible. We don’t necessarily have to derive that assumption from experience – Plato could in principle have formulated the argument, despite not knowing about Minecraft. It depends what you mean by “simulation” and “experience”. But even if we did, one bit of experience does not entail another. We live in a world where simulation is possible, and sweet carbonated drinks are popular. It is not bullshit to suggest that this does not imply that the simulators are Coke drinkers.
“They are the easiest bit to fake.” Well, why fake them at all? Why not run a simulation with an isolated cluster of stars? And if you fake them, why fake them at that level of detail? With novae, supernovae, pulsating stars, quasars, double stars, accretion disks around black holes, gravitational lens effects around galaxies yada yada?
The whole simulation idea is treading very close to “GAWD put the dinosaurs bones in the earth to TEST OUR FAITH!” terrain.
“Secondly the only assumption we need to make in order to set up the simulation argument, is that simulation is possible.” Actually, for it to work, you also need the assumption that simulation is easier (requires fewer resources) than the real thing, in some sense. And in our experience, and probably also in theory, it isn’t, if you maintain the same level of detail.
> “They are the easiest bit to fake.”
I mean, it could be that there is a single premade skybox for all 1000 simulations.
This is a really good expression of the consequences of my main objection to the idea of our universe being a simulation.
Also, surely this explains quantum physics/relativity. They are also “black boxes” that have perfectly good explanations in the higher universe, but look like nonsense to the scientists in the ground universes.
Funnily, I would rather argue that quantum physics is good evidence that we are not living in a simulation, since anyone bothering to simulate a universe would implement some more sensible rules.
That’s a cute argument, but I feel like the simulation argument has a more fundamental flaw, which is that everything we know about simulations and the sort of people who run simulations and, well, everything, comes from within the world. It doesn’t seem like we can meaningfully reason about any meta-world within which our object world is a simulation, because we have no principled reason to expect the things we know based on this world to apply to such a meta-world. Various objections to this might be raised, but they will also be based on the object world and it’s hard to see how you get around this problem.
We know that people like us would tend to simulate people like us. Therefore, if our universe is top reality, there are going to be many simulated humans, so we should be puzzled to be real. If our universe is not top reality, then we’re simulations already.
When have people like us ever simulated people like us? Once we’re actually running a thousand million billion simulated Earths, that argument might hold water, but it seems like you’re borrowing paradoxes from the future.
I guess there is an argument that it’s puzzling we’re not characters in a novel given how many novels have been printed?
Surely if we go by frequency of persons likely to ever be simulated, we’re all sex dolls.
More generally, a quick survey will show that approximately 99% of all computronium everywhere in time and space will be used for pornography. Therefore, if you live in a simulated universe, you live in a pornoverse.
Looking at my fellow Earthlings, I am quite confident that we are not in anyone’s idea of a pornoverse, and therefore we are with 99% confidence not in a simulated universe. QED, and apologies to the future residents of the simulated pornoverses we will eventually create.
So basically the described “ground level” universes are a lot like MMO game worlds. Any entities are either unconscious or “beamed” from above. Well actually… by which criteria are our game worlds not universe simulations? Do they need the players to forget they’re in a game/simulation?
Note: We can’t actually create Turing Machines in this universe, only a limited set of Finite Automata.
Also, what exactly a ‘universe’ is seems like a pretty important caveat. Is there a null universe, containing nothing? In that case it’s the only ‘ground-level’ universe, and every other is capable of simulating it, so long as ‘ability to simulate’ is some sort of homomorphism from the structure of one universe to another.
If we bar such trivial cases, we end up with a pretty arbitrary line in the sand, where “a universe” comes to mean “structure similar enough to the one we occupy that it doesn’t make us feel awkward”.
This argument has a major unstated premise: that consciousness is less mechanically complex than simulation. It assumes that it is possible to have a universe that has the mechanics for consciousness but not the mechanics for simulation. This seems unlikely.
I assume that all a simulation needs is memory and simple logical operations. We, as conscious beings possess both. We can remember things. We can also perform logical operations (AND, OR, XOR, NAND). So, our universe contains all the mechanics necessary to simulate another universe.
To hypothesize a universe with consciousness without the mechanics required to simulate requires hypothesizing consciousnesses with no memory or no ability to even calculate an XOR function. Those would not be recognizable as consciousnesses.
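The claim that memory plus simple logical operations is all a simulation needs can be made concrete: NAND alone is functionally complete, so any system that can store bits and compute NAND can in principle build every other gate, and hence arbitrary circuits. A minimal illustrative sketch (the function names are my own):

```python
# Everything below is built from a single primitive: NAND.
def nand(a, b):
    return not (a and b)

# Standard constructions of the other gates out of NAND alone.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Sanity check: the derived gates match their truth tables.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```

So a consciousness that can remember bits and evaluate NAND already has the raw ingredients for universal computation; the question is only one of scale.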
Bostrom’s argument breaks down as soon as the simulation is not able to produce consciousness. It’s an argument about how we should regard our own consciousness. As such, any simulations which cannot themselves further simulate such that those simulated entities are conscious are the ground floor, even if they can simulate something.
Scott’s argument is that having our simulations fail to seem conscious themselves is evidence of being on the ground floor.
My response is that there is no reason to expect conscious seeming simulations to be impossible. If we have the mechanics of consciousness in our universe we also have the mechanics of simulating.
Assume that a pseudo-Turing machine could simulate consciousness, the only change from a true Turing machine being a finite (but still incredibly large) amount of memory. We can build machines like this. We can even simulate machines like this by hand. In a ground-level universe, not only would conscious simulations fail to work – so would everything else using the same mechanics.
There are several possible answers to the problem of our simulations not seeming conscious that don’t run into this problem.
1. We don’t have the technology to simulate consciousness effectively but the technology is possible. Those people saying that conscious simulations are impossible are like people saying that heavier than air flight is impossible in 1800. High likelihood.
2. Simulating consciousness may be too complex and resource-intensive to do practically. It might require a computer the size of a planet. Or it might require simulating a universe at one billionth the speed of our own. Still, impractical is different from impossible. Lower likelihood.
3. Consciousness may be a poorly understood concept, and people will declare that any simulation does not contain consciousness. Saying that a simulated world within ours contains no consciousness may just be declaring that all simulated worlds are P-zombie worlds. Regardless of the behavior exhibited, some people will declare that a simulated world contains no consciousness. Extremely high likelihood.
How, compared to these, does “4. The nature of the universe precludes any simulation being conscious” even rate as a reasonable possibility?
My problem with the simulation argument is, anything we infer from this universe about computation, physics, or the motives of beings who might want to simulate things, *can’t* be used to extrapolate into a completely different universe that might maybe contain a ‘simulation’ of ours.
It’s not just untestable. It’s incoherent.
If this universe is not base reality, it doesn’t tell us anything about what base reality is, even indirectly. There’s no *a priori* reason to think capable beings would mostly simulate universes like their own in any way.
All three of these are great.
I freaking love the third one.
To the mind of a child, there is probably a higher-level version of physics in which sexual desire makes sense. Actually, it makes sense to me, and I’m probably running on the same physics. This was shamelessly stolen from Steven Pinker (How the Mind Works), who believes there might be aliens (running on the physics of our universe) to whom consciousness is not a mystery.
“We can’t explain consciousness, THEREFORE different laws of physics” is not a very convincing argument, at least to me. But maybe we are using the expression “making sense of” in different ways, so I ask instead:
(1) If you were the alien who found out that the brain was a featureless crystal, would you be convinced we are in a simulation?
(2) What would it take to convince you that the crystal doesn’t actually have a hidden structure (running in the same laws of physics) that is doing the computation?
> Our own consciousness is probably being run in a world that operates on that higher-level law.
If our consciousness runs in an outside world, why is it affected by drugs and such?
Simple – when you do some cocaine in the simulation, the automated machinery in the real (or higher-level) universe drops some cocaine into your brain-vat. That’s probably where the placebo effect could come from – the AI checking whether you are actually consuming some substance that should have an actual effect uses some simplified checking algorithm (that, for example, checks whether you believe the substance would have an effect on you, whether the substance is supposed to have the effect, etc), and if some checks are missing, it just dumps some minimal version of the effect on you so it doesn’t have to re-check everything.
Why would the simulation argument depend upon a two-level scale, rather than the more natural idea that each level of simulation would have fewer cycles in which to run simulations (and thus simulations that, from the perspective of the simulating level, run slower, but whose speed from the simulated level appears normal)? Planes that are at ground-level are only that way accidentally (i.e., because we haven’t made sufficiently complex simulations yet), and are perfectly capable of running Turing machines.
The idea that in order to be ground-level a system would need to be constructed to be ground-level permanently is kind of absurd, and unnecessary, and backwards. Why do something complicated like outlawing turing machines and remote-hosting brains when you could just allow the simulation to grind away slowly, like every other simulation?
Rather than thinking about things in terms of leaves on the simulated-universe tree & how likely we are to be a leaf, we can consider the tree to be potentially arbitrarily deep, and ask how likely we are to be the root node rather than some intermediate one. Depending on how deep the tree is, the answer might be “not very likely”, and the tree grows deeper all the time if simulation is easy (which it appears to be).
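The root-versus-leaf framing can be quantified: if every universe runs k child simulations down to depth D, then under uniform self-sampling the chance of being the root is one node out of the whole tree. A toy calculation (the values of k and D are illustrative assumptions, not claims about actual simulation counts):

```python
def p_root(k, depth):
    """Probability of being the root of a complete k-ary
    simulation tree of the given depth, assuming we are a
    uniformly random node of that tree."""
    total_nodes = sum(k**d for d in range(depth + 1))
    return 1 / total_nodes

# Even modest branching makes being the root unlikely.
print(p_root(2, 1))   # 3 nodes total -> ~0.333
print(p_root(10, 3))  # 1,111 nodes total -> ~0.0009
```

The deeper or bushier the tree is assumed to be, the smaller the probability of being the root node, which is the intuition behind “not very likely”.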
I never understood how physicists could seriously entertain the simulation argument. Isn’t it just Intelligent Design under another name?
Intelligent Design is frequently criticized for misunderstanding or misrepresenting science (see the controversy over “irreducible complexity”). Simulation arguments generally don’t depend on that at all… although I guess Scott’s actually does.
True Turing machines, the ones with an infinite tape, are already impossible. It could be the case that all levels allow for bounded Turing machines, but at the ground level they are so bounded that they can’t simulate any other Turing machine.
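One way to see what “bounded” buys you: a Turing machine restricted to a finite tape has only finitely many configurations, so it is formally just a (very large) finite automaton, yet the configuration count is astronomical even for tiny machines. A back-of-the-envelope sketch (the machine sizes are illustrative):

```python
def num_configs(states, symbols, cells):
    """Total configurations of a Turing machine restricted to a
    finite tape: control state x head position x tape contents."""
    return states * cells * symbols**cells

# A tiny machine: 10 control states, binary tape of 64 cells.
# Finite, hence formally a finite automaton -- but the number of
# configurations already dwarfs anything we could enumerate.
print(num_configs(10, 2, 64))
```

So the interesting question is not whether a ground-level universe permits infinite tapes (ours doesn’t either), but how severely its bounds would have to be tightened before no machine could simulate another.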
I’m not sure this makes sense. I would say that whatever simulates the brains is part of the same universe where these brains reside.
I think that Carroll’s argument is flawed: if ground-level universes don’t allow for conscious beings, then by an anthropic argument we don’t live in a ground-level universe.
But anyway, the simulation argument doesn’t pass Occam’s razor, therefore it is not worth seriously considering.
Conservation of computronium indicates that most raw events (quantum state transitions or bit flips) cannot occur in ground-level universes because every such event in a ground-level universe requires a corresponding event in every higher-level universe. But that’s not where the simulation argument applies; we apply it to interesting events, not raw bit-flips. Specifically, we apply it to the operation of introspective consciousness. I ask the question, “Is this the Real World(tm)?”, and so it probably isn’t the Real World.
For this to hold in spite of conservation of computronium, the lower-level universes must be effectively compressed, with a much higher ratio of introspective consciousness to raw bit-flipping or quantum state transitions than the higher-level universe. We would then tend to recognize ground-state universes by their having been designed to facilitate compression and efficient use of computronium, e.g.:
– The universe would be no larger than necessary to hold its supply of introspective consciousness in a sufficiently interesting environment.
– The universe would not operate for any great unnecessary period without introspective consciousness taking place.
– The granularity of the universe would be as high as possible while still allowing introspective consciousness; each conscious entity would occupy the minimum number of grid cells or the like.
– Probably most of the conscious entities in the ground-level universe would be engaging in interesting levels of introspective behavior (unless that’s not what the simulators care about, in which case none of them would).
In what ways does this model of a ground-level universe resemble the one we actually live in?
“In what ways does this model of a ground-level universe resemble the one we actually live in?”
I think you’re making the same point that I’m trying to express. My answer to your question would be,
Not at all.
If the purpose of the supposed simulation is to study conscious behaviour, it’s set up horribly.
And if the purpose isn’t to study conscious behavior, it’s still set up horribly because it’s got consciousness working at a level that’s likely to start mucking up whatever it is that the simulators are trying to study (if we haven’t done so already, e.g. if they are big on the artistic value of natural ecosystems).
I think we’re on the same page here, and I wish I’d caught your earlier posts before I penned mine. If this is a simulated universe, it’s either inefficient to the point of gross incompetence – without the coder ever having been caught in an overt mistake – or it is an impossibly baroque imitation of incompetence for the sake of faking out the sims. And the latter is an unfalsifiable hypothesis on the same level as God hiding dinosaur bones to fake out the scientists.
Creationism for nerds, to tide us over until the rapture for nerds. And if we haven’t made obeisance to our Robot and/or Basilisk overlords, hellfire and damnation for nerds.
Your two comments here are an excellent statement of my problems with this hypothesis. It always surprises me how little thought people seem to put into the Whys and Hows of running a simulated universe before postulating that we’re likely to be in one.
>And if the purpose isn’t to study conscious behavior,
Totally unnecessary. I’m a layperson, but it’s my layperson understanding that space and time (that is, the spacetime continuum) are kinda (?) the same thing, or describe the same thing, or what-have-you, and anyone studying conscious beings (and creating a simulation of space-time to run them around in) would have access to what appear, to us, to be all instances of time simultaneously. It would be like having access to all the slides of a film – you would not even need to run the movie through a projector to study individual frames; you could look at individual frames at your discretion.
Besides, we’re conscious (or at least I am), and based on my (possibly flawed) logic above, I surmise that consciousness of participants in the simulation is not a bug but a deliberate feature, as it strikes me that having conscious beings in the simulation experience time in a subjective manner would not grant any additional information to the simulation’s creators, but would be informative to the creatures in the simulation.
We could be the “you won’t believe what crazy stuff consciousness can result in!” study, though.
“but we’re not p-zombies”
Wow, getting awfully presumptuous there…
It seems to me that if the civilization that is building a simulation have limited resources, the majority of the simulations built by it would be ground level by design. For example, let’s say I want to have a human-level intelligence as my partner in Minecraft. Well, what do I do?
– I can, in theory (let’s say for the sake of argument I actually can), build an emulated brain (down to the quantum level) using redstone logic within Minecraft. That would be quite a piece of work, re-inventing the whole brain machinery, and it would take some MAJOR supercomputing system to run it afterwards. Probably not feasible within my lifetime.
– I can develop a human-level AI that runs on a separate piece of specially-designed hardware and use it to remotely run the second character in Minecraft. I’ve heard somewhere that the most powerful supercomputer nowadays can do about as many (maybe more) calculations per second as a human brain, but I’ll still need to hire a team of the best programmers ever, and wait 10-20 years for them to develop the software. So yeah, maybe feasible, but not for me; I am most definitely not THAT rich, and not that patient. Wait a second, are there any other ways to do it?..
– Beep… Beep… Beep… Hey, Johnny, mind coming over to my place tonight? Bring your laptop, we are going to play Minecraft! I’ve got BEEEeeeeEEEeeRRR!!!
Ok, so here I go – I’ve got me a human-level co-player for the low price of one phone call and several bottles of cheap alcoholic drink, by outsourcing the calculationally difficult part to a pre-made and pre-programmed wetware brain. Simpler than making a supercomputer to accommodate a full quantum-level brain EM, isn’t it?
I think a better anthropic argument against the simulation argument would be as follows:
Any universe, simulated within another universe, must have less capacity for computation than the parent universe. As such, we would expect it to have fewer minds in it (unless the simulated universe is optimized for mind-maximization, which ours does not appear to be). Any children of that universe would themselves have even fewer, going down exponentially. Since we are ourselves minds, we should therefore assign a high probability that we are not in a simulation.
I also feel as though any hypothesis that posits internally-acausal injection of minds into our own system should have a theodicy penalty applied to its prior…
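The shrinking-capacity point above can be sketched numerically: if each level can devote only a fraction r of its computational capacity to its child simulations, the number of minds supported falls off geometrically with depth, so most minds sit near the top of the hierarchy. A toy model (the values of r and the mind counts are assumptions for illustration):

```python
def minds_by_level(top_minds, r, levels):
    """Minds supported at each simulation depth, assuming every
    level passes only a fraction r of its capacity downward."""
    return [top_minds * r**n for n in range(levels + 1)]

pops = minds_by_level(1e9, 0.1, 3)
print(pops)  # roughly [1e9, 1e8, 1e7, 1e6]

# Fraction of all minds that live at the unsimulated top level:
print(pops[0] / sum(pops))  # ~0.9: most minds are at the top
```

Under these assumptions the anthropic bookkeeping runs opposite to Bostrom’s: a randomly sampled mind is most likely unsimulated, unless simulated universes are heavily optimized for mind-maximization.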
Shouldn’t simulation get gradually harder as you go down the levels of recursion, not just hit a wall?
Shouldn’t the majority of universes then be ones where simulation is in principle possible, but difficult enough that the number of civilizations that bother falls below one per universe? Actual ground-level universes will be rare if simulating just a single universe at ground-1 requires tiling the outer solar system with computronium.
Imagine what a world would be like if people had a handheld device that was an infinite UTM.
You have now simulated a world in which infinite UTMs exist. Your simulation has low resolution compared to other types of simulation, but that’s not important because it’s a matter of degree.
Now imagine a world in which no Turing machines can exist, but the concept exists, and people exist. Could one of those people imagine a world where infinite UTMs exist?
There is no ground level.
Cool thoughts! Interestingly, unlike the standard simulation argument, Carroll’s response requires a very specific version of the ‘typicality’ premise, namely Bostrom’s ‘Self Sampling Assumption’. This requires that when we treat ourselves like random samples we reason in such a way that we favor hypotheses where a greater proportion of *all* observers are such that they have *our* experiences. Another alternative is to reason in such a way that we favor hypotheses true of a greater number of observers with our experiences. (Reading ‘hypotheses’ as including ‘self-locating’ hypotheses, both of these entail the weaker principle that we are probably typical among observers that have our experiences.) This way of putting things is a bit loose, but more on these two ways of taking ourselves as random samples here: https://static1.squarespace.com/static/55d3621de4b07a4744ce4a23/t/55f91ca4e4b0fa2519ad7a73/1442389156509/OBARS.pdf
I’ve noticed that the Boltzmann brain literature is really inconsistent on which principle is being appealed to. Where theory T predicts lots of Boltzmann brains, sometimes the idea is that, given T, most observers with our experiences (at least our momentary experiences I guess) would be Boltzmann brains, so we should think we are BBs given T, which is crazy (on Moorean grounds?) so we’d better reject T. But sometimes the idea is that since T predicts lots of instantaneous BBs almost all of which are experientially *different* from us, we can safely reject T on typicality grounds. (Why are almost all of them different from us? Maybe most look out on the void or have totally unorganized experiences.)
Another thing about Carroll’s argument: if the reason it’s a bottom level is because of limited processing power, you’d have thought the number of sims might be limited too. How much harder is it to support simulations of simulations than to support simulations? Anyway, it’s not obvious that a hierarchy will be pyramid-shaped.
By the way, Scott, I wanted to say I am enjoying Unsong immensely. And I have been known to make references to SSC in my critical thinking classes.
> This requires that when we treat ourselves like random samples we reason in such a way that we favor hypotheses where a greater proportion of *all* observers are such that they have *our* experiences.
The amusing thing about this whole simulation argument is that it takes one of the standard objections to religion (“Oh come on, are you seriously telling me the entire universe has been created simply for the sake of one particular species on a rock in an obscure solar system in a corner of a galaxy out of innumerable galaxies?”) and answers “Yes” 🙂
More than that, that “self-sampling assumption” really does make the ‘gods’ “Bigger and Better versions of humans” – we are to assume that whoever is running this simulation (either our rich, powerful descendants or others) have similar interests, experiences, tastes and mental architecture, so that in the same way as our psychologists use rats for studies, they’re simulating us for their purposes.
“Oh come on, are you seriously telling me the entire universe has been created simply for the sake of one particular species on a rock in an obscure solar system in a corner of a galaxy out of innumerable galaxies?”
Well, the simulationist can here take a gambit that would make theists a little uneasy, which is that there needn’t actually be anything going on in the far reaches of space (even virtually) until we ‘look’. (Berkeley got a very simple ontology out of this sort of move and tried to save the veracity of our experiences (and so the non-deceptiveness of the God who provides them) with a generous principle of interpretive charity, according to which “there really are physical objects like tables” comes out true on his view, even though tables are just collections of experiences. Most theists did not buy this despite the elegant metaphysics.)
Well, if you’re going to run a simulation, it probably would be more efficient not to have huge swathes of the fake universe blank, you could have other civilisations running on other planets and compare’n’contrast to see if the dominant primates on Xigyz’zuk behave in the same way as the dominant primates on Earth, and if being dominant reptiles makes a change to behaviour 🙂
I know this assumes you have resources to burn, but if you can simulate an entire universe inna box, then you probably do have resources to burn. Maybe we’re all living six miles apart and we only think due to the strictures of the simulation that there are vast distances between star systems!
>If Carroll’s deconstruction of the simulation argument is right, then the more trouble we have explaining consciousness, the more that should push us to believe we’re in a ground-level simulation. There’s probably a higher-level version of physics in which consciousness makes sense. Our own consciousness is probably being run in a world that operates on that higher-level law.
I had a thought recently which went like this:
In the Matrix, I have a residual image; an emulated body that simulates my own body. I assume the Matrix works by turning off my physical body’s conscious signals to and from my brain, and hooks my brain into the “signals” of the emulated one in the matrix, such that if I stub my toe in the Matrix it feels to my brain like stubbing my physical toe.
With that in mind, what happens if you do brain surgery on my avatar in the Matrix? Either my avatar has an emulated brain, with the prospect that consciousness arising from my physical brain would also result in consciousness arising in the emulated brain, meaning my avatar is an autonomous person I’ve somehow mind controlled (and thus Matrix-Neo can potentially get “desynced” from Real-Neo and run off and do his own thing), or else brain surgery ends up looking like this:
>Maybe an alien dissecting a fellow alien’s head would just find a perfectly featureless crystal with no internal structure, which is observed to inexplicably send nerve impulses to the rest of the entity’s body.
That’s pretty convincing. Probably all the brain surgeons in the matrix are agents and all humans are being nudged away from taking this career path.
Alternatively something in the pod kills/wipes/removes sections of your physical brain as you suffer damage within the simulation.
If this is a dream simulation, then my totem should not spin forever.
Universes with high computing resources simulate universes with less (they could also simulate larger universes more slowly). So the real “ground-level universe” is one where simulations are almost impossible due to resource constraints. In this setup, most observers (per unit time) would not be in the ground-level universes but scattered across the hierarchy of simulations. The greatest source of observers would be universes with many extra computing resources that decide to run observer-heavy simulations.
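Here’s a toy way to make that quantitative (all numbers are my own made-up illustration, not anything from Carroll): assume each universe spends some fraction of its compute on child simulations and keeps the rest for hosting its own observers.

```python
# Toy model: each universe devotes sim_fraction of its compute to child
# simulations; the rest hosts local observers. Observer count at a depth
# is assumed proportional to the compute kept at that depth.

def observer_distribution(levels=6, sim_fraction=0.5):
    """Return the share of all observers found at each simulation depth."""
    shares = []
    compute_at_level = 1.0  # total compute across all universes at this depth
    for _ in range(levels):
        shares.append(compute_at_level * (1 - sim_fraction))  # kept locally
        compute_at_level *= sim_fraction  # handed down to child simulations
    total = sum(shares)
    return [s / total for s in shares]

dist = observer_distribution()
# Under these assumptions the shares fall off geometrically with depth,
# so roughly half of all observers sit outside the base level.
```

Of course the interesting cases are exactly the ones where `sim_fraction` is large and the simulations are observer-heavy, which tilts the distribution away from ground level.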
Nitpick. I disagree with Carroll’s assignment of “high level” and “low level”.
When we think of our own universe as a simulation, it may feel more natural to call the metaverse the “higher level”. But if we ran a simulation within our own universe, the virtual reality would also be called the “higher level”. It’s like those idiomatic jokes about how we “fill up” as well as “use up” a gas tank. Shouldn’t it be either “fill up and use down” or “fill down and use up”?
I think the reason for the collision is that we assign “lowness” to whichever thing feels most concrete at the time. Our own universe feels the most concrete and is therefore assigned the status of “lower level”. Both the metaverse and the virtual reality are hypothetical fictions, and therefore the metaverse is “higher” in one discussion while the virtual is “higher” in another discussion.
From a computer science perspective, I prefer the virtual universes be the higher level. The metaverse is the Base Universe since it’s ontologically-grounded. Virtual universes (like virtual machines) are higher up the stack than the one true Base Universe.
I don’t even know what it means for an actual universe to not permit UTMs. All you need is arbitrary memory and arbitrary GOTOs. In a universe without UTMs, you’re basically telling me “pick only one”.
E.g. what does it mean for a universe to not have arbitrary repetition? Am I not allowed to jog more than a maximum of 4 miles? Must my every utterance be unique?
E.g. what does it mean to not branch? Do I not have free will? Does it imply Calvinism? Is the entirety of the universe a lone monoid?
E.g. what does it mean to not permit memory? Is this like Paradigm City in The Big O? 500 Days of Summer? No writing utensils?
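To make the “memory plus GOTOs is all you need” point concrete, here’s a minimal Turing machine interpreter (my own toy sketch): a dict as unbounded tape, and a transition table whose lookups are exactly the conditional GOTOs a ground-level universe would have to forbid.

```python
# Minimal Turing machine: a dict as unbounded tape, a rule table as the
# "conditional GOTO". Forbidding this machinery is what banning Turing
# machines would have to amount to.

def run_tm(rules, tape, state="start", halt="halt", max_steps=10_000):
    """rules: (state, symbol) -> (write, move, next_state); move is +1/-1."""
    tape = dict(enumerate(tape))  # arbitrary memory
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)            # blank cells read as 0
        write, move, state = rules[(state, symbol)]  # the conditional GOTO
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

# Example program: append a 1 to a block of 1s (unary increment).
inc = {
    ("start", 1): (1, +1, "start"),  # skip over the existing 1s
    ("start", 0): (1, +1, "halt"),   # write one more 1, then halt
}
result = run_tm(inc, [1, 1, 1])  # -> [1, 1, 1, 1]
```

The rule table is the whole machine; nothing here requires exotic physics, which is the commenter’s point.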
Congratulations on reinventing Cartesian dualism. Though of course the basic idea that ‘mind’ lives in some realm outside our universe is much older than Descartes himself. Christian theology is full of dualism: the spirit is willing but the flesh is weak, but on the other hand he who lusts within his heart is more guilty than the adulterer.
Me, I don’t buy it, I don’t believe that there’s another plane of existence where all the Really Important Stuff lives, I don’t believe in a disembodied ‘soul’. I’m surprised that someone who spends as much time with patients as you do can believe it. Don’t you often encounter people with physical or chemical disorders of the brain? Doesn’t this affect their ‘consciousness’?
>Christian theology is full of dualism: the spirit is willing but the flesh is weak, but on the other hand he who lusts within his heart is more guilty than the adulterer.
Pedantic note: Matthew 5 does not say the lusting is worse, merely equivalent. Or to put it another way, if someone doesn’t commit adultery (perhaps because they can’t get away with it practically) but totally would if they could, they shouldn’t fool themselves about how moral they are.
I think Matthew 5 should be read in conjunction with the woman taken in adultery; there were a lot of people very willing to judge others and drag them before a tribunal. Jesus is saying “And are you really so moral? Which of you has had lustful thoughts, or would have committed adultery if you had the opportunity? You don’t get to say you are better than the sinners simply on a technicality.”
Consciousness is a trait of our species. It exists because it works, in that its evolution has significantly contributed to our ability to survive and thrive. Our current understanding of consciousness is largely descriptive, but that does not preclude a greater understanding in the future. Unknown does not equal unknowable.
The simulation conundrum is largely a semantics game. Depending upon how you define terms and what rules you impose, the answer can be whatever you preordain. If the universe is simply an abstract simulation, then reality has no meaning and science is a fool’s errand. Therein lies the fate of the followers of Heaven’s Gate and the end of that genetic lineage.
The distinction between ground-level and higher-level universes is not necessary. Even if every universe can contain simulations, it is still likely that any arbitrarily selected universe is a simulation.
The argument surely falls foul of Bayesianism? “By the terms of the simulation argument itself, most universes will be at ground level, since every high-level universe can simulate many ground-level ones. So (says the argument) we should expect to be at ground level.” I have read the actual Carroll post and that seems a fair summary. Most animals on earth are insects, so I should expect to be an insect. I count my limbs, I adjust my priors, I no longer expect to be an insect. The “expectation” that we should be ground level is equally easily displaced, and the argument vanishes.
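The insect analogy can be run as a literal Bayesian update (toy numbers of my own choosing): a huge typicality prior is wiped out by one observation the hypothesis makes nearly impossible.

```python
# Toy Bayes update for the insect analogy: even a 90% prior on "I am an
# insect" collapses once I count four limbs, because that observation is
# nearly impossible given the hypothesis.

def posterior(prior, likelihood, alt_likelihood):
    """P(H | E) by Bayes' rule for a binary hypothesis H vs not-H."""
    evidence = prior * likelihood + (1 - prior) * alt_likelihood
    return prior * likelihood / evidence

p_insect = posterior(
    prior=0.9,           # "most animals are insects"
    likelihood=1e-6,     # P(I count four limbs | I am an insect)
    alt_likelihood=0.9,  # P(I count four limbs | I am not an insect)
)
# p_insect comes out tiny: the typicality prior is easily displaced.
```

The same move applies to “expect to be at ground level”: the moment we observe something the ground-level hypothesis assigns low likelihood (a universe that can build simulators), the prior updates away.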
The other half of the paradox fails too, because the simulation argument only requires that, looking at our universe, we can imagine universes a bit cleverer than ours which can do simulations, even if ours can’t.
I’m disappointed, Scott. You majored in philosophy. It’s bad grammar to call arguments true or false! They can be valid, invalid, sound, unsound, but not true or false.
Well, this is probably an inductive argument, and so invalid in an uninteresting sense, requiring somewhat less precise and less established descriptions (“strong” vs. “weak” has some popularity). But you are of course correct that “true” or “false” should not be among them.
You’re looking at this wrong. If you were a programmer of a universe, what would you do?
You’d optimize it.
To determine if you’re living in a simulation, check for bugs that come from optimization. If you’re not paying attention to something, it won’t have to be simulated well; objects many light years away are more granular…
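That optimization is basically level-of-detail rendering as game engines do it. A toy sketch (my own illustrative rule, nothing more): resolution falls off with distance from any observer, and the “bug” would be noticing the coarseness.

```python
# Toy level-of-detail rule: the farther an object is from an observer,
# the coarser its simulation. A detectable "bug" would be noticing that
# distant objects are rendered at resolution 1.

def detail_level(distance_ly, budget=1_000_000.0):
    """Grid resolution allotted to an object at a given distance.

    Nearby objects get the full budget; resolution decays with the
    square of distance, floored at one cell (maximally granular).
    """
    return max(1.0, budget / (1.0 + distance_ly) ** 2)

nearby = detail_level(0.0)   # full budget for objects under observation
far = detail_level(999.0)    # a single cell: "objects many light years
                             # away are more granular"
```

Any concrete falloff curve would do; the testable prediction is just that resolution is a decreasing function of observer attention.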