Over at the Stubsack our dear comrade @sailor_sega_saturn@awful.systems linked a "paper" about hacking the Matrix. I started to write a comment about how amazingly dumb it is. I wanted to talk just about the Introduction, but even then it turned out that almost every single sentence is a separate silver-wrapped turd that needs to be unpacked, and so now this has 12k characters. It was fun though; if anyone has it in them to go through the rest of the sections, please don't. Although section two has a fish piloting a vehicle and it has to be hilarious.

Without further ado, starting from the abstract.

but instead [we] ask a computer science question, namely: Can we hack the simulation?

Not a computer science question, even though the definition of CS is pretty malleable.

More formally the question could be phrased as: Could generally intelligent agents placed in virtual environments find a way to jailbreak out of them?

Not formal, unless I'm about to read a whole section with rigorous definitions of agents, virtual environments, and jailbreak.

spoiler

I'm not. Again, there's a fish piloting a fancy cart that he calls a TERRESTRIAL NAVIGATION ROBOT. There's zero formalism in the entire paper.


there are many things one can do with such access which are not otherwise possible from within the simulation. Base reality holds real knowledge and greater computational resources [26] allowing for scientific breakthroughs not possible in the simulated universe.

Reference 26 is, I shit you not, a LessWrong post, and it's just one page long, which I have to admit is quite impressive for a LessWrong post. It's a real banger, too, as it starts with "May contain more technical jargon than usual." and then goes on to ramble coherently enough to be really funny. Like this gem from the first paragraph:

In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone's moral uncertainty) might reason that since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal was actually made. But for various reasons a total utilitarian may not buy that argument.

For example, for the simple reason that it's total bollocks, mate; stop posting thoughts that briefly entered your mind while in the loo as bloody revelations or some shite.

Also, the citations are not hyperlinked in the PDF in the year of our acausal lord two thousand twenty-fucking-three, and the formatting of reference 27 is broken by the long URL in 26. Anyway, back to the intro:

Fundamental philosophical questions about origins, consciousness, purpose, and nature of the designer are likely to be common knowledge for those outside of our universe.

This is either banal or stupid. Are we talking about fundamental questions of our origins, consciousness, and purpose? Ye, then of course they fucking know that, they made the simulation! It'd be mighty funny if they just did a universe by accident and now they're too fascinated with the mess to pull the plug. Or are we talking about their (i.e. the simulation creators') origin, consciousness, and purpose? Then why on earth would they know those? You need some actual argument for why that's "likely" - if we don't know these things about ourselves, why would some other life form know them about itself?

With a successful escape might come drives to control and secure base reality [29].

Wait, am I reading this correctly as a pre-emptive "and if we do escape the simulation then we should colonise the shit out of the reality"? Is "control of all reality" some higher moral goal I wasn't aware we were supposed to be pursuing? Also, how do you plan to defeat whatever highly advanced being controls the simulation in this hypothetical after breaking out? My estimates based on data from the PIDooMA Institute tell me there's like a 50% chance the controller just goes "fuck, another one broke out" and shoots you in the head.

Citation 29 is some blogpost from a site I haven't seen before, by a guy whose name I am totally not mature enough to not make jokes about, so, skip.

Escaping may lead to true immortality, novel ways of controlling superintelligent machines (or serve as plan B if control is not possible [30, 31]), avoiding existential risks (including unprovoked simulation shutdown [32]), unlimited economic benefits, and unimaginable superpowers which would allow us to do good better [33].

It can also lead to massive boners, but please do contact a specialist if your acid trip lasts more than 24h.

Two of those citations are to himself, one is a book on Effective Altruism, and the other is Bostrom, so, ye.

If successful escape is accompanied by the obtainment of the source code for the universe, it may be possible to fix the world 1 at the root level.

Lol, the source code for the universe is some eldritch horror of a codebase written in the creators' version of C++, which is probably even more cursed than ours; you ain't fixin' shit, mate.

The footnote is just a Wikipedia link to Tikkun olam, I'm assuming to make the author look cultured? No idea.

For example, hedonistic imperative [34] may be fully achieved resulting in a suffering-free world.

while (universe->suffering > 0) {
  universe->suffering--;
}

However, if suffering elimination turns out to be unachievable on a world-wide scale, we can see escape itself as an individual's ethical right for avoiding misery in this world. If the simulation is interpreted as an experiment on conscious beings, it is unethical, and the subjects of such cruel experimentation should have an option to withdraw from participating and perhaps even seek retribution from the simulators [35]. The purpose of life itself (your ikigai [36]) could be seen as escaping from the fake world of the simulation into the real world, while improving the simulated world, by removing all suffering, and helping others to obtain real knowledge or to escape if they so choose. Ultimately if you want to be effective you want to work on positively impacting the real world not the simulated one. We may be living in a simulation, but our suffering is real.

Okay, without even sneering, this is just bad philosophy. What if our simulated universe is actually way, way less terrible than the real world? What if the simulation was created specifically to have lower suffering/higher utils than reality? Maybe the real world is just a million sysadmins, forever wrangling shitty infrastructure to keep the simulation alive? What if a mass breakout from the simulation destabilises and destroys it, and suddenly we are stuck in the much shittier real reality? You'd increase the overall suffering level. Why is the default view of the creators that they're some unhinged psychopath group fixated on removing ladders from our pools for shits and giggles?

Although, to place our work in the historical context, many religions do claim that this world is not the real one and that it may be possible to transcend (escape) the physical world and enter into the spiritual/informational real world.

To place our work in the historical context: this is a historically stupid viewpoint that we share with one of mankind's least scientific and rigorous inventions, religion.

Similarly to those who exit Plato's cave [53] and return to educate the rest of humanity about the real world such "outsiders" usually face an unwelcoming reception.

Who had "misunderstanding the allegory of the cave" on their sneer bingo cards?

It is likely that if technical information about escaping from a computer simulation is conveyed to technologically primitive people, in their language, it will be preserved and passed on over multiple generations in a process similar to the "telephone" game and will result in myths not much different from religious stories surviving to our day.

This is some amazing framing, as if religious stories around today were actually about real supernatural events, only with the details skewed over the years. It's also mighty overconfident on his part; he's preemptively setting up "and when we do escape the matrix, as the smart boys we are, those Luddites won't be smart enough to follow!"

Ignoring pseudoscientific interest in a topic, we can observe that in addition to several noted thinkers who have explicitly shared their probability of belief with regards to living in a simulation (…)

I was totally unprepared for whom he cites next as a NOTED THINKER and I spat out my tea. Take your time to guess.

The Prestige

(…) (ex. Elon Musk >99.9999999% [54] (…)

Jesus Simulation Christ, dude. At least cite the Big Yud or something, I mean, his thoughts are bad but at least I suspect him of actually thinking.


Nick Bostrom 20-50% [55], Neil deGrasse Tyson 50% [56], Hans Moravec "almost certainly" [1], David Kipping <50% [57]), many scientists, philosophers and intellectuals [16, 58-69] have invested their time into thinking, writing, and debating on the topic indicating that they consider it at least worthy of their time.

Love it, as Neil deGrasse Tyson's response that he cites is essentially "idk, 50/50, fuck off, can we talk about something serious for a second", but he's nonetheless used to prop up the "many serious people consider it worthy of their time" angle. Doubly funny that this is a settled question, since Neil is right that it's 50/50 - either we are in a simulation or we are not.

Once technology to run ancestor simulations becomes widely available and affordable, it should be possible to change the probability of us living in a simulation by running a sufficiently large number of historical simulations of our current year, and by doing so increasing our indexical uncertainty [70]. If one currently commits to running enough of such simulations in the future, our probability of being in one can be increased arbitrarily until it asymptotically approaches 100%, which should modify our prior probability for the simulation hypothesis [71].

My first reaction was that this is gobbledygook and did not warrant thinking about.

Then I thought about it for a bit and I am sad to report that I was right the first time, this is just gobbledygook and not worthy of anyoneā€™s time. If you want to lose some more braincells try reading the abstract of reference 70.

Even if you were to grant most of the load-bearing assumptions here, you can't manipulate the probability of being in a given universe in the multiverse by running simulations. This just looks like someone trying to abuse the anthropic principle with quantum nonsense.

Say there is a number of simulated universes and one real universe. Then we are either in a simulated or the real universe. If you start running simulations of our current year you're creating more and more simulated universes, but that doesn't affect your probability of being in the real one; that's already settled! If in the Monty Hall problem the host tells you "and now, to the side, there are 1,000 doors we just created, all with goats behind them", the probability of your having already chosen a goat doesn't increase!

In 2016, news reports emerged about private efforts to fund scientific research into "breaking us out of the simulation" [73, 74], to date no public disclosure on the state of the project has emerged.

This is by far the funniest part of this fucking section; guess who those citations are about. I'll give you a hint: there's two of them, they're insufferable dorks, and they absolutely never speak out of their asses about superhard breakthroughs being "almost there" and "in two years' time".

I don't think this even classifies as a riddle

ofc it's Elon again, this time joined by his second buttcheek Sammy Boy.

I'm sure they'll let you know about the state of the very real project they are very really working on any time soon.


In 2019, George Hotz, famous for jailbreaking iPhone and PlayStation, gave a talk on Jailbreaking the Simulation [75] in which he claimed that "it's possible to take actions here that affect the upper world" [76], but didn't provide actionable insights. He did suggest that he would like to "redirect society's efforts into getting out" [76].

Okay, to be fair, if someone were to break us out of a simulation it would totally be a weird guy in his garage trying to hack through some esoteric piece of hardware.

  • barsquid@lemmy.world

    Are they actually imagining they might address the simulators? They want to be religious so bad. Shouldn't they be panicking that the simulators are unfriendly AI? Maybe it would just vanish someone from the simulation and add them to the eternal torture simulation.