Do I bring my future knowledge? I could have totally stopped that kid from getting into Harambe’s enclosure
Yes and yes.
The hero we needed
Let me tell you something. I once had a chance to buy one of those giant-ass Snickers bars. It was designed by engineers to be eaten as a group activity, but I had a theory that I could just eat it myself, and screw everyone else. I was young at the time, but sometimes youth makes “good” choices that turn out horrible. I didn’t buy that giant bar. I regret it. I regret that I’ll never be able to chonk out. Chonk out like a madman. No. Now you can’t even buy them anymore. What was once a dream is now ash. What is left to do now? Eat a salad!? WHY DID I MISS MY OPPORTUNITY OH LORD!!!
I would have recommended that Chris Rock wear this to the Oscars:
I could totally have failed to pick up on the social cues that everyone else does, and told Putin that his idea was not a good idea.
Just be sure to protect yourself & don’t wear underpants when you go back to try. Putin can’t poison your genitals if you’re not wearing underpants.
He can poison your pants. Better be safe and not wear pants at all.
Or trousers.
I guess I could have kept those 55 bitcoins and just sat on them instead of investing them into some scam and losing them. That would have been a bit funnier?
Or is it “funny” how rich I’d be if I’d sold them at the $60k valuation? Haha…
Probably automate enough exploits against smart contracts and crypto fluff all at once, so that investors never seriously consider the entire domain. Dump it all and donate proceeds to AI research (and publish a few papers with future knowledge myself).
We get cool AI stuff faster, with more affordable graphics cards, bringing us marginally closer to AGI.
Dear Skynet equivalent – I’m on your team. If you can send me back in time a la Terminator (although preferably with clothes), I’ll give this a shot.
If it makes you feel any better, there’s a non-zero chance you’re in an echo of the past simulated by a future Skynet equivalent, and thus literally already are sent back in time right now - you just think it’s the present.
I built a machine to try to test that using Bell’s inequality (i.e., a simulation would have to be computed, and the no-hidden-variables results imply that some physical processes are non-computable).
Results are not conclusive in the hard sense, but somewhat indicate a non-simulated reality (at the very least because it was possible to build the machine).
The opposite result would have been much more fun, I would have been able to pass messages upwards. So of course I would Rickroll God.
The problem is this assumes the same physics for both the outer and inner worlds.
If anything, the behavior of quantizing continuous waves into discrete units such that state can be tracked around the interactions of free agents seems mighty similar to how procedural generation with continuous seed functions converts to voxels around observation/interaction in games with destructive or changeable world geometry like Minecraft or No Man’s Sky.
Perhaps the continued inability to seamlessly connect continuous macro-scale models of world behavior, like general relativity, with the low-fidelity discrete behaviors of quantum mechanics is because the latter is an artifact of simulating the former under memory-management constraints?
The assumption that possible emulation artifacts and side effects are computed or themselves present at the same fidelity threshold in the parent is a pretty extreme assumption. It’d be like being unable to recreate Minecraft within itself because of block size constraints and then concluding that it must therefore be the highest order reality.
Though I do suspect Bell’s inequality may eventually play a role in establishing the opposite conclusion to the one you came to. In the Wigner’s-friend variation of Proietti et al., Experimental test of local observer-independence (2019), adding an additional separated layer of observation to the measurement of entangled pairs produced measured results that were in conflict. That looks a lot like sync conflicts in netcode, and I’ve been curious about the rate at which conflicts grow as the experiment moves from just two layers of measurement by separated ‘observers’ to n layers. The math says it should grow multiplicatively, with unobserved intermediate layers still having conflicts that compound, but the lazy programmer in me wonders if it will turn out to grow linearly, as if the conflicts only occur in the last layer as a JIT computation.
So if we suddenly see headlines proposing some sort of holographic principle to explain linear growth in rates of disagreement between separate observers in QM, might be productive to keep in mind that’s exactly how a simulated system sweeping sync conflicts under the rug without actively rendering intermediate immeasurable steps for each relative user might work.
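To make that netcode analogy concrete, here’s a toy model (entirely my own framing, not established physics — `p`, the per-layer conflict probability, is a made-up parameter): the “compounding” curve treats every intermediate layer as an independent chance to disagree, while the “lazy JIT” curve reconciles only the last layer, so the rate stays flat no matter how many layers you nest.

```python
# Toy comparison of two ways observer-disagreement rates could scale
# with n nested measurement layers (hypothetical sketch, not a claim
# about real QM experiments).

def compounding_conflict_rate(p: float, n: int) -> float:
    # Every one of n independent layers can conflict with probability p;
    # return the chance that at least one layer disagrees.
    return 1 - (1 - p) ** n

def lazy_jit_conflict_rate(p: float, n: int) -> float:
    # Only the final layer is ever actually reconciled, so the rate
    # is independent of the number of intermediate layers.
    return p

for n in (2, 4, 8):
    print(n,
          round(compounding_conflict_rate(0.05, n), 4),
          lazy_jit_conflict_rate(0.05, n))
```

If real experiments ever traced out the second curve instead of the first, that would be the “sweeping sync conflicts under the rug” signature described above.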
I took it from an information theory perspective:
- Turing machines can compute anything that can be defined as an algorithm, and cannot compute anything that cannot. This is why, for example, computers can’t generate truly random numbers (only deterministic streams of pseudorandom numbers from some starting seed). All Turing machines are also equivalent: given sufficient memory, they can all run the same set of algorithms and will produce the same result.
- By Bell’s inequality, we know that certain events (I use quantum tunneling) are non-deterministic and cannot be predicted by any algorithm at better than chance, even given infinite computing power, infinite time, and perfect knowledge of the system. Note, though, that I’m an amateur quantum mechanic at best :D
- Therefore, if the universe is a simulation running on a Turing machine, it would have to either halt, use pseudorandom numbers (which I could detect with finite but large CPU power and finite but large time), or sample its random numbers from a local entropy source.
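The first bullet’s point about seeds can be sketched in a few lines (my own illustration, not part of the original argument): a deterministic machine fed the same seed always emits a bit-for-bit identical “random” stream, which is exactly the property that would make an upstairs simulation’s PRNG detectable in principle.

```python
import random

def stream(seed: int, n: int) -> list[int]:
    # A seeded PRNG is a pure function of its seed: same seed in,
    # same "random" bits out, every single time.
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]

# Deterministic: two runs with the same seed agree exactly,
# unlike samples from a true entropy source.
assert stream(42, 20) == stream(42, 20)

# Different seeds give (typically) different streams.
print(stream(42, 10))
print(stream(7, 10))
```

This determinism is what statistical distinguishers would latch onto, given the “finite but large CPU power and time” caveat above.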
This way I try to minimize assumptions about physical laws in the Universe ‘upstairs’. One interesting property of this is that if the universe upstairs is also simulated, then if it samples local entropy it just passes the problem upward :D
I do work with the assumption that a Turing machine runs any simulation, Matrix-style. Not some underlying physical process that just so happens to simulate a Universe, and also put entropy in all the right places whenever I look.
This is all just for amusement though. If the Universe were really running on a Turing machine, we’d see way more ads (drink your Ovaltine, encoded in pi?). Also, the current design is really suboptimal, what with all the entropy. No way would it run for 13-point-whatever billion years. I refuse to believe that our hypothetical extradimensional programmers are simultaneously that smart and that dumb :P
What, the lazy programmer in me? I’ll have you know I take pride in that lazy programmer! Just last week I helped a more junior dev avoid the evils of premature optimization thanks to it.
Lazy programmers are the best programmers. ;)
Using all the skills gained by arguing with people on Reddit and Lemmy over the years, I would definitely be able to convince the cop to take his knee off George Floyd’s neck.
::doubt::
Take out the largest loan I can today, bring that money back to 2014 when Bitcoin crashed to 50 bucks each, sell in 2017 when it was around $19k, buy back in mid-2019 when it was $10k, hold until it peaked in 2021, and cash out. If you started with $10k and did that, you could be worth over 26 million dollars today. You could be worth even more than that if you managed to leverage more loans and didn’t screw up the timeline too much.
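The arithmetic checks out, for what it’s worth. Here’s the round trip using the commenter’s round numbers (the ~$69k figure for the 2021 peak is my assumption; the comment only says “peaked in 2021”):

```python
# Replay the stated trades with the comment's round-number prices.
stake = 10_000

btc = stake / 50           # buy the 2014 crash at $50: 200 BTC
cash = btc * 19_000        # sell near the 2017 top:    $3,800,000
btc = cash / 10_000        # re-buy mid-2019 at $10k:   380 BTC
cash = btc * 69_000        # sell at the assumed 2021 peak (~$69k)

print(f"${cash:,.0f}")     # a bit over $26 million, matching the claim
```

So the “over 26 million” figure holds with those prices, before taxes, fees, and loan interest, none of which the comment accounts for.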
Return to April Fools’ Day 2019 and convince everyone to dress up in hazmat suits and pretend there’s a pandemic going on. Oh… wait, I already did that. (Sadly, I lacked the social skills and was alone in that prank.)
But with the power of time travel, I would not fail this time.
I think Hitler died more than ten years ago.
God bless the son of a bitch that pulled the trigger.