If it makes you feel any better, there’s a non-zero chance you’re in an echo of the past simulated by a future Skynet equivalent, and thus you have literally already been sent back in time - you just think it’s the present.
I built a machine to try to test that using Bell’s inequality (the idea being that a simulation would have to be computed, while no-hidden-variables results imply some physical processes are non-computable).
Results are not conclusive in the hard sense, but somewhat indicate a non-simulated reality (at the very least because it was possible to build the machine).
The opposite result would have been much more fun: I could have passed messages upwards. So of course I would have Rickrolled God.
The problem is this assumes the same physics for both the outer and inner worlds.
If anything, the way continuous waves quantize into discrete units (such that state can be tracked around the interactions of free agents) seems mighty similar to how games with destructible or changeable world geometry, like Minecraft or No Man’s Sky, convert procedural generation from continuous seed functions into voxels around observation and interaction.
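To make the analogy concrete, here’s a toy sketch of that pattern (nothing to do with how either game is actually implemented - `LazyWorld` and the sine-based noise are made up for illustration): a continuous seed function defines "terrain" everywhere, but discrete voxels only come into existence, and start being tracked as mutable state, when something observes them.

```python
import math

class LazyWorld:
    """Toy procedural world: a continuous function defines terrain
    everywhere, but voxels are only materialized (and tracked as
    discrete, mutable state) when an agent first observes them."""

    def __init__(self, seed=1234):
        self.seed = seed
        self.observed = {}  # (x, y) -> discrete block id, created on demand

    def _continuous(self, x, y):
        # Smooth underlying "wave" - a cheap stand-in for Perlin-style noise.
        return math.sin(x * 0.1 + self.seed) * math.cos(y * 0.1 + self.seed)

    def observe(self, x, y):
        # Quantize the continuous value into a discrete voxel only on first
        # observation; afterwards the stored state is authoritative, so
        # later edits (mined blocks, etc.) would persist.
        if (x, y) not in self.observed:
            self.observed[(x, y)] = 1 if self._continuous(x, y) > 0 else 0
        return self.observed[(x, y)]

world = LazyWorld()
a = world.observe(3, 7)  # the voxel materializes here
b = world.observe(3, 7)  # same discrete state comes back from the cache
```

The memory cost scales with what has been observed, not with the size of the world - which is the whole appeal if you’re a resource-constrained simulator.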
Perhaps the continued inability to seamlessly connect continuous macro models of world behavior, like general relativity, with the low-fidelity discrete behavior of quantum mechanics is because the latter is an artifact of simulating the former under memory-management constraints?
Assuming that any emulation artifacts and side effects are computed, or even present, at the same fidelity threshold in the parent reality is pretty extreme. It’d be like being unable to recreate Minecraft within itself because of block-size constraints and concluding that Minecraft must therefore be the highest-order reality.
Though I do suspect Bell’s inequality may eventually play a role in reaching the opposite conclusion to yours. After an additional, separated layer of observation was added to the measurement of entangled pairs in the Wigner’s-friend variation in Proietti et al., “Experimental test of local observer independence” (2019), the measured results were in conflict. That looks a lot like sync conflicts in netcode, and I’ve been curious whether we’re in for surprises in the rate at which conflicts grow as the experiment moves from just two layers of measurement by separated ‘observers’ to n layers. The math says conflicts should grow multiplicatively, with unobserved intermediate layers still compounding conflicts, but the lazy programmer in me wonders if growth will turn out to be linear, as if the conflicts only occur in the last layer as a JIT computation.
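The two growth hypotheses are easy to sketch as a toy model (these function names and the per-layer conflict rate p are mine, purely for illustration - no claim this is the right physics):

```python
def compounded(p, n):
    """Conflict probability if each of n observation layers independently
    conflicts with rate p and the conflicts compound across layers."""
    return 1 - (1 - p) ** n

def last_layer_only(p, n):
    """Conflict probability if intermediate layers are never rendered and
    only the final measurement is reconciled (JIT-style): flat in n."""
    return p

# With p = 0.05 per layer the two models diverge quickly as n grows:
for n in (2, 5, 10):
    print(n, round(compounded(0.05, n), 4), last_layer_only(0.05, n))
```

If experiments ever map out conflict rate versus number of observer layers, which curve the data follows would be the interesting part.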
So if we suddenly see headlines proposing some sort of holographic principle to explain linear growth in the rate of disagreement between separate observers in QM, it might be productive to remember that’s exactly what a simulated system would look like if it swept sync conflicts under the rug instead of actively rendering the intermediate, immeasurable steps for each relative user.
I took it from an information theory perspective:

Turing machines can compute anything that can be defined as an algorithm, and cannot compute anything that cannot. This is why, for example, computers can’t generate truly random numbers - only deterministic streams of pseudorandom numbers derived from some starting seed. Also, all Turing machines are equivalent: given sufficient memory, they can all run the same set of algorithms and will produce the same results.
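The determinism is easy to see for yourself - two generators given the same seed produce bit-for-bit identical "random" streams, because there is no randomness there at all, just an algorithm unrolling:

```python
import random

# Two independently constructed generators, same seed.
a = random.Random(42)
b = random.Random(42)

stream_a = [a.random() for _ in range(5)]
stream_b = [b.random() for _ in range(5)]

# Identical every time: the "randomness" is a deterministic function
# of the seed, exactly as it must be on a Turing machine.
assert stream_a == stream_b
```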
By Bell’s inequality, we know that certain events (I use quantum tunneling) are non-deterministic and cannot be predicted by any algorithm at better than chance, even given infinite computing power, infinite time, and perfect knowledge of the system. Note, though, that I’m an amateur quantum mechanic at best :D
Therefore, if the universe is a simulation running on a Turing machine, its operators would have to either halt, use pseudorandom numbers (which I could detect with finite but large CPU power and finite but large time), or sample their own random numbers from a local entropy source.
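For flavor, here’s the simplest member of the family of statistical tests such detection would build on - a monobit frequency check in the spirit of the NIST randomness test suite (my own toy version, not the official one). It only catches crude defects; a well-designed PRNG sails through every cheap test, which is why the “finite but large CPU power” caveat is doing a lot of work:

```python
import random

def monobit_z(bits):
    """Monobit frequency test: for genuinely uniform bits, the
    normalized excess of ones over zeros should stay near zero."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)  # +1 per one, -1 per zero
    return abs(s) / n ** 0.5               # roughly standard-normal if uniform

rng = random.Random(7)
fair = [rng.getrandbits(1) for _ in range(10_000)]
biased = [1 if rng.random() < 0.6 else 0 for _ in range(10_000)]  # broken source

monobit_z(fair)    # small: consistent with a uniform generator
monobit_z(biased)  # large: the 60/40 bias sticks out immediately
```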
This way I try to minimize assumptions about physical laws in the Universe ‘upstairs’. One interesting property of this is that if the universe upstairs is also simulated, then if it samples local entropy it just passes the problem upward :D
I do work with the assumption that a Turing machine runs any simulation, Matrix-style - not some underlying physical process that just so happens to simulate a Universe and also puts entropy in all the right places whenever I look.
This is all just for amusement though. If the Universe was really running on a Turing machine, we’d see way more ads (drink your ovaltine encoded in pi?). Also the current design is really suboptimal what with all the entropy. No way it would run for 13-point-whatever billion years. I refuse to believe that our hypothetical extradimensional programmers are simultaneously that smart and that dumb :P
What, a lazy programmer in me? I’ll have you know I take pride in that lazy programmer! Just last week I helped a more junior dev avoid the evils of premature optimization thanks to it.
Lazy programmers are the best programmers. ;)