I recently read the paper Are you living in a computer simulation? (pdf) by Nick Bostrom. A draft version of the paper appeared in 2001 and it was finally published in 2003. I mention this because today the paper will be immediately familiar to many more people than it would have been three years ago. That's because three years ago we didn't have the Matrix trilogy to spread the concept among the masses.
The gist of the paper is very similar to the main thesis of The Matrix, in which the entire human population was living out their lives in a simulation. The paper is a slightly more formal presentation of a similar concept, except that it actually tries to make a convincing argument that at least one of the following three propositions is true:
- we will never reach a stage where we are able to simulate our own ancestors (i.e., we'll become extinct before then)
- even if we do attain the technical prowess to run such simulations, we won't be interested in running a significant number of them
- we are almost certainly living in a computer simulation.
You can read the paper for the full details, but in brief the argument goes something like this. Let's say we do become technologically advanced enough to run planetary-scale simulations of the entire human race, and that we are interested in running such simulations. In that case it's highly likely that we will run full-scale simulations of our ancestors. All things being equal, we have no reason to believe that our particular existence is special in any sense: that is, we have no reason to believe that it is our particular lineage that will produce the civilization that eventually runs the simulations. In particular, if we believe that there will be a large number of simulations, then it is just as likely that we ourselves are part of some such simulation.
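The counting intuition behind this step can be made concrete with a toy calculation. The numbers and the equal-population assumption below are mine, purely for illustration, not from the paper: suppose there is one "base" civilization and some number of ancestor simulations, each containing the same number of conscious observers. By indifference, the chance that we are among the simulated observers is just the simulated fraction of all observers.

```python
# Toy version of the indifference reasoning: one base reality plus
# n_sims equally populated ancestor simulations (my illustrative
# assumption). The probability of being simulated is the fraction
# of all observers who live inside a simulation.

def prob_simulated(n_sims: int) -> float:
    """Fraction of observers who are simulated, given n_sims
    equally populated simulations plus one base reality."""
    return n_sims / (n_sims + 1)

for n in (1, 10, 1_000_000):
    print(n, prob_simulated(n))
```

With even a single simulation the odds are already even, and as the number of simulations grows the probability of being simulated approaches one, which is the force of the "large number of simulations" premise.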
The paper itself is very interesting and open to all sorts of debate. In this post, however, I just want to focus on one particular aspect. Nick talks about the possibility of running simulations within simulations. That is, let's say humans develop some ultra-gigantic computers to run large-scale, fine-grained simulations of their ancestors. Within this simulation, given enough time and compute power, the simulated ancestors will eventually develop their own simulated versions of the giant computers, inside the original simulation. In contemporary parlance, these would be called virtual machines. Not only that, they would be a very special kind of virtual machine, because they are recursively virtualizable. That is, if the simulation proceeded indefinitely, the simulated ancestors would start running their own simulations, within which the simulated simulated ancestors would eventually start running their own simulations, and so on.
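The nesting described above is just recursion, and can be sketched in a few lines. Everything here (the function name, the depth cutoff standing in for a finite compute budget) is my own illustrative scaffolding, not anything from the paper:

```python
# Minimal sketch of recursive virtualization: each simulation can
# host a further simulation, which can host another, and so on.
# `max_depth` stands in for the finite compute budget that a real
# host would impose on its nested guests (my assumption).

def run_simulation(depth: int, max_depth: int) -> list[int]:
    """Run a simulation at nesting level `depth`; its inhabitants
    eventually launch a nested simulation at level depth + 1,
    until the assumed resource limit `max_depth` is reached."""
    levels = [depth]
    if depth < max_depth:
        # the simulated ancestors build their own simulator
        levels += run_simulation(depth + 1, max_depth)
    return levels

print(run_simulation(0, 3))  # → [0, 1, 2, 3]
```

The returned list is the whole tower of simulations: level 0 is the base reality, and every later entry is a simulation running inside the one before it.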
Now, the notion of recursively virtualizable platforms is slightly hard to grasp and even harder to formalize. I detected a slight (perhaps unintended and merely technical) contradiction in the paper. Nick argues that if we are indeed living in a simulation and have no way of "looking outside the box", then we have no way of determining what the natural laws look like in the "real" world. That is, it might very well be the case that the real Universe (the one in which our simulation is running) is governed by laws we have no clue about, simply because we can't observe that Universe. However, this scenario rules out the possibility of recursive virtualization. For simulated ancestors to be able to run their own simulations, the world observed by (and exposed to) the original, or base, simulation must be identical to the world observed by all nested simulations. If it is not, then the nested simulations will NOT be identical to the original simulation, violating the rules of the simulation itself.
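The consistency constraint in this objection can be put in toy form. Treating the "laws" each level exposes as an opaque label (entirely my own modeling choice, not something from the paper), recursive virtualization demands that every level of the tower expose the same laws as the level that hosts it:

```python
# Toy model of the consistency constraint: a tower of simulations
# where each level inherits the laws exposed by its host. `laws` is
# just an opaque label; this is my illustration, not the paper's.

def nested_chain(base_laws: str, depth: int) -> list[str]:
    """Build a tower of depth + 1 levels, each level exposing the
    laws it inherits from the level above it."""
    return [base_laws] * (depth + 1)

def recursively_virtualizable(chain: list[str]) -> bool:
    """A tower is recursively virtualizable only if every level
    exposes identical laws."""
    return all(level == chain[0] for level in chain)

tower = nested_chain("observed physics", 3)
print(recursively_virtualizable(tower))  # → True
```

If the base reality's laws differed from the laws exposed inside the simulation, the chain would fail this check at the very first step, which is exactly the tension between "unknowable outside laws" and "faithful nested simulations" described above.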
What do you think?