• tiny_electron@sh.itjust.works · 6 months ago

    There is a ceiling though. A computer made of the matter of one universe cannot simulate an entire universe at the same speed. It’s like installing a VM on a computer: the VM is always slower. Each nested layer would then become exponentially slower, with speed tending toward 0.

    Having said that, combined with the fact that our universe is about 13.8B years old, this would make the age of our root universe exponentially larger than 13.8B years.

    It could maybe be feasible if we live in one of the first few layers, but beyond that our root universe would have died from heat death long ago.
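
    To put rough numbers on it (the 10x slowdown per layer below is just an assumption I made up for illustration, not a known figure):

    ```python
    # Toy model: each nested simulation runs at a fraction of its host's speed.
    SLOWDOWN_PER_LAYER = 0.1     # assumption: each layer runs at 10% of the layer above
    OUR_UNIVERSE_AGE_GYR = 13.8  # billions of years experienced inside our layer

    for depth in range(6):
        # Time that must pass in the root universe for our layer to reach 13.8 Gyr
        root_age = OUR_UNIVERSE_AGE_GYR / (SLOWDOWN_PER_LAYER ** depth)
        print(f"depth {depth}: root universe must be at least {root_age:,.0f} Gyr old")
    ```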

    • 31337@sh.itjust.works · 6 months ago

      The simulations could be imperfect simulations. So, each nested simulation would lose fidelity, simulate a smaller universe, or simulate a universe with less life. I think one hypothesis I’ve heard is that wave functions are an approximation, and the simulation only fully simulates particles when they are observed. Kinda like how games do level-of-detail optimizations when you are further away from objects.
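
      Roughly what I mean by the level-of-detail analogy, as a toy sketch (the distance thresholds and detail levels are made up, not anything a real engine or the universe necessarily does):

      ```python
      # Toy level-of-detail picker: spend less computation on far-away objects,
      # like only "fully simulating" things that are actually being observed.
      def pick_detail(distance_to_observer: float) -> str:
          if distance_to_observer < 10:     # close: full simulation
              return "full"
          elif distance_to_observer < 100:  # mid range: cheap approximation
              return "approximate"
          return "statistical"              # far away: barely simulated at all

      for d in (5, 50, 500):
          print(d, pick_detail(d))
      ```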

      Edit: Another possibility is that nothing says the simulation we’re in started at the beginning of the universe; it could’ve just been given initial conditions and started yesterday for all we know.

      I don’t know if we are in a simulation, but I think it’s plausible. I think a God (at least of the religions I know of) is implausible, but possible. I kinda like the many-worlds hypothesis better than simulation theory, but I guess they’re not mutually exclusive.

      • tiny_electron@sh.itjust.works · 6 months ago

        There are indeed ways you could make it work, but then you add more hypotheses, and thus the cost of the simulation hypothesis increases.

        Optimizations are indeed necessary, but just like the player is something special in a game, the observer would need to have a special status in the universe. I don’t like this idea, because the history of science has always moved in the direction of making us, the observers, less and less special.

        Moreover, if life spreads through the universe, the simulation would run into a scaling issue, with the number of observers growing exponentially.
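
        As a toy illustration of that scaling problem (the compute budget and doubling rate are arbitrary numbers, just to show the trend):

        ```python
        # Toy model: a fixed compute budget shared among observers whose
        # number doubles every step.
        TOTAL_BUDGET = 1_000_000.0  # arbitrary units of compute per tick
        observers = 1

        for step in range(8):
            per_observer = TOTAL_BUDGET / observers
            print(f"step {step}: {observers} observers, {per_observer:,.1f} budget each")
            observers *= 2  # exponential spread of life
        ```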

        I agree with you that in the end we just don’t know; it’s fun to push ideas to their limits!

    • m0darn · 6 months ago

      There is a ceiling though. A computer made of matter of one universe cannot simulate an entire universe at the same speed.

      Right, but we don’t know what the real universe’s limitations are, and I’m hesitant to speak too authoritatively about the capabilities of an arbitrarily advanced civilization.

      I don’t think simulation theory is true. E.g. calculating the gravitational forces between everything in the universe would presumably be extraordinarily cost intensive, yet essentially irrelevant to anyone observing (I mean things like gravitational waves, not the Moon).
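
      To give a sense of why that would be cost intensive: naive pairwise gravity scales with the square of the number of bodies (the body counts below are arbitrary examples):

      ```python
      # Naive pairwise gravity is O(n^2): every body interacts with every other body.
      def pairwise_interactions(n_bodies: int) -> int:
          return n_bodies * (n_bodies - 1) // 2

      for n in (1_000, 1_000_000, 1_000_000_000):
          print(f"{n:,} bodies -> {pairwise_interactions(n):,} force pairs per step")
      ```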

      • tiny_electron@sh.itjust.works · 6 months ago

        Even though our knowledge of physics is incomplete, a VM that runs its simulation faster than its container would be a paradox. You could stack successive layers of reality that go faster and faster, eventually reaching unbounded processing speed and allowing the computer in the root layer to perform an infinite amount of computation in a finite time.

        You may say that this could still be possible, since our understanding of physics is lacking. And that’s fine! But I think this paradox shows that the VM can only run slower than reality.
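
        Here is the reductio in toy form; the 2x speedup per layer is exactly the (absurd) assumption being pushed to its limit:

        ```python
        # Reductio: suppose each nested layer somehow ran 2x faster than its host.
        # The speedup compounds without bound as layers are stacked, letting the
        # root computer do arbitrarily large amounts of work in finite time.
        SPEEDUP_PER_LAYER = 2.0  # the assumption being tested

        for depth in (1, 10, 40, 100):
            print(f"{depth} layers deep -> {SPEEDUP_PER_LAYER ** depth:.3g}x the root's speed")
        ```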