Steve Yegge has written a series of posts called “A programmer’s view on the universe”. I recommend both part 1 and part 2 of the series, although this post is a reaction to the most recent part: Mario Kart.
In the second post, Steve makes a very interesting argument about the limits of embedded systems, and especially about breaking those limits. He then compares those embedded systems to the world we live in, making a very good (though somewhat unclear) point.
My biggest problem with the whole post is the metaphor at its centre: Mario Kart. Steve uses the Mario Kart example (and indeed most games) to illustrate the boundaries of a simulation: The Invisible Wall. Anyone who’s played games more than casually knows about invisible walls.
The whole problem is that the invisible wall is not the boundary of the simulation; there’s a perfectly viable “something” on the other side. Game characters generally can’t get there, but that doesn’t mean the world doesn’t exist beyond the wall, just like my neighbour’s apartment doesn’t become undefined just because he locks the door and prevents me from entering. Indeed, breaking through invisible walls is a favourite pastime of cheaters in some online FPS games, and of explorers in MMOs. It works precisely because the simulation still exists there.
Take it from an insider — invisible walls are placed by designers to direct the flow of where the player can go, to restrict the player to a zone on which design and art efforts can be focused. In development, we sometimes turn off our invisible walls to test something or take a screenshot somewhere you’re normally not allowed to go. They’re not an inherent part of the simulation (which tends to extend in every direction), but rather something put in to turn the simulation into a game. An infinite stretch of nothingness probably isn’t much fun.
I guess that goes to show you need to be careful when selecting your metaphors.
Like I mentioned, the point he’s making is still very interesting, especially in connection to simulations of intelligence, which is my own field of work. What happens when you simulate intelligence and fail to completely close the simulation’s border? I guess that depends on how strong the simulated intelligence is.
I’ll get back to that, but first let me tell you about something called the Simulation Argument (or Simulation hypothesis, as Wikipedia calls it). Simply explained: if we assume that we’ll eventually learn to simulate consciousness on a computer, it’s reasonable to conclude that we’d run such simulations a whole lot, for all sorts of reasons. If that’s possible, the number of simulated consciousnesses (is that even a word?) would vastly outnumber the number of “real” consciousnesses… which means the probability that you and I are simulated rather than “real” approaches 100%. And of course, no one can ever claim to be “real”; there may always be another layer of simulation.
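The arithmetic behind that “approaches 100%” claim is easy to sketch. Here’s a back-of-the-envelope version in Python — all the numbers are made up purely for illustration, not taken from the argument itself:

```python
# Back-of-the-envelope sketch of the Simulation Argument's arithmetic.
# All quantities below are invented for illustration.

real_consciousnesses = 7_000_000_000            # one "real" world
simulations = 1_000                             # hypothetical simulation runs
consciousnesses_per_simulation = 7_000_000_000  # each as populous as ours

simulated = simulations * consciousnesses_per_simulation
total = simulated + real_consciousnesses

# If you can't tell which kind you are, your odds of being simulated
# are just the simulated fraction of all consciousnesses.
p_simulated = simulated / total
print(f"P(you are simulated) = {p_simulated:.6f}")
```

With these numbers the probability is 1000/1001 ≈ 0.999, and it only climbs as you assume more (or nested) simulations.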
This is a scary thought, since it means a whole lot of things that all boil down to two facts: we’re not unique, and we’re certainly not in control of our own lives. It’s also the kind of thought that makes great Sci-Fi, touched on in movies like The Matrix or TRON. It poses the interesting question of which world is the real one. The recursive problem inherent in that is nicely illustrated by eXistenZ, to stay with the Sci-Fi movie theme.
Anyway, relating the whole thing to Steve’s post about “the great undefined” beyond the limits of a simulation: what would happen if you gave a society of simulated consciousnesses the ability to break out of their simulation? Or what if you left a bug somewhere that let them outsmart you and break through the barrier?
Steve already answered that: Segmentation Fault. BSOD. Invalid Memory Access. An Exception? Or maybe some minor fault in the host system causing unpredictable behaviour. In short, “undefined”. It even happened to some guys playing a TRON game. Now what happens if the Simulation Argument is true, and we really are all living inside a simulation? What if scientists at some point figure out how to break past our own borders? To measure time before the Big Bang? To measure temperatures below absolute zero? To hack our simulation’s limits? If it’s there to poke, you just know someone’s bound to poke it.
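In programmer terms, poking past the border looks something like this toy sketch (the “world” here is just an invented five-slot list): inside the array, everything is well defined; one step past the end, and the best case is that the runtime throws an exception — in a language like C, you’d get genuinely undefined behaviour or a segmentation fault instead.

```python
# A toy "world": only positions 0..4 are simulated.
world = ["grass", "road", "wall", "road", "grass"]

print(world[2])  # inside the invisible wall: perfectly defined

try:
    world[5]     # one step past the simulation's border
except IndexError as e:
    # Python politely raises; C would give undefined behaviour or a segfault.
    print("undefined:", e)
```

The friendly exception is the lucky case — it means the host noticed. The scary case is the one where it doesn’t.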
We could crash our own universe. Now that is a scary thought.