Simulation Hypothesis and Simulation Technology

This text was inspired by the discussion “Turing Machine” on the everything-list:

http://groups.google.com/group/everything-list/t/40e45a536974609d

Many people nowadays believe that the level of scientific knowledge allows us to conclude that everything, including the whole universe, could be simulated. Some of them go further and say that the universe we know actually is a simulation (some sort of Matrix). Alternatively, one hears that there is no difference between the real world and a simulation, hence “the universe is a simulation” is just a tautology. The mind is no exception, and computationalism states that it could be simulated too. The latter, though, is not important below, as I will look at the simulation hypothesis from a practical side and limit myself for the time being to a simulation similar to that in the movie The Matrix (actually at a much simpler level).

Simulation enjoys widespread use in engineering nowadays. I personally work with finite elements (more specifically with ANSYS Multiphysics) and I can witness that numerical simulation is a good business today. Computer-aided engineering relies extensively on the numerical solution of partial differential equations for structural mechanics, heat transfer, electromagnetics, etc. Worldwide it makes about a billion per year and continues to grow rapidly. I do not have numbers for molecular simulation, but it seems to enjoy widespread use as well. In a nutshell, this means that simulating the equations derived by physicists allows engineers to make even more money. This in turn means that the equations should be correct, and we actually have no evidence to doubt them.
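
To make concrete what “numerical solution of a partial differential equation” means here, below is a minimal sketch of my own (not tied to ANSYS or any particular commercial code, and with assumed parameter values): the 1D heat equation solved with explicit finite differences. Real CAE tools of course use finite elements, unstructured meshes and implicit solvers.

```python
# Minimal sketch: explicit finite-difference solution of the 1D heat equation
#   dT/dt = alpha * d2T/dx2
# Illustrative only; industrial codes use finite elements and implicit solvers.
import numpy as np

alpha = 1e-4               # thermal diffusivity, m^2/s (assumed value)
L, n = 1.0, 101            # rod length and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # time step chosen to satisfy the explicit stability limit

T = np.zeros(n)
T[0], T[-1] = 100.0, 0.0   # fixed temperatures at both ends of the rod

for _ in range(5000):
    # update interior nodes; the boundary values stay fixed
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T[::10])  # temperature profile approaching the linear steady state
```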

Yet now comes the “but”. I can also tell for sure that if someone speaks of simulating the universe, then current simulation technology just does not apply. Let us consider for example a hybrid vehicle (compared with the Universe, this must be a trivial example).

In addition to the components of a normal car, there is a battery, an electrical motor, power electronics and control units, which makes it more fun to simulate. The question is whether even such a simple example could be simulated from first principles as a whole system in the foreseeable future. The answer is damn simple: no way.

In order to understand that simple answer, let us first look at simulation practice. A hybrid car simulated as a whole is handled by system-level simulation. There are blocks that model the behavior of different components. This helps an engineer at the design stage to choose between different alternatives. Yet the models in the blocks are pretty simple. They are basically analytical models, described when necessary through look-up tables.
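
As an illustration of what such a block might look like, here is a hypothetical sketch of mine (not an excerpt from any real tool, with made-up calibration numbers): a motor block whose efficiency is not computed from physics at all but simply read from a look-up table.

```python
# Hypothetical system-level "block": an electric motor whose efficiency is not
# computed from physics but read from a look-up table (speed -> efficiency).
import numpy as np

# assumed, made-up calibration data that would normally come from device-level
# simulation or from measurements
speed_rpm  = np.array([0.0, 1000.0, 3000.0, 6000.0, 9000.0])
efficiency = np.array([0.00, 0.80,  0.92,   0.90,   0.85])

def motor_block(speed, electrical_power_w):
    """Return mechanical power out and heat loss for a given operating point."""
    eta = np.interp(speed, speed_rpm, efficiency)   # simple table interpolation
    mechanical_power = eta * electrical_power_w
    heat_loss = electrical_power_w - mechanical_power
    return mechanical_power, heat_loss

print(motor_block(4500.0, 50e3))  # e.g. 50 kW electrical input at 4500 rpm
```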

On the other side, one finds in practice so-called device-level simulation, where a piece of the car is simulated separately (motor, battery cell, power electronics). At this level, engineers usually treat one physical discipline per simulation. For a motor, for example, engineers simulate structural mechanics, thermal management, electromechanics, etc. separately.

Finally, it is necessary to organize the transfer of results between the different simulations. Let us consider this with the example of a battery. A battery converts chemical energy to electrical energy and vice versa. As we deal with chemical reactions, in principle even quantum chemistry is required to understand what is going on. Yet at system level this is just a voltage source that generates heat in parallel. How does one get from the detailed picture to the simple one in practice? In industry this is called know-how. Engineers definitely can do it; after all, they are paid for that.
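
A hypothetical sketch of the system-level end of that chain: the battery reduced to a voltage source behind an internal resistance, where the ohmic loss is the heat generated. The two parameters below are assumed, not taken from any real cell; in practice they are exactly the numbers that the “know-how” delivers.

```python
# Hypothetical system-level battery block: open-circuit voltage behind an
# internal resistance. All electrochemistry is hidden in two numbers that, in
# practice, come from device-level simulation or measurement ("know-how").
V_OC  = 3.7    # open-circuit voltage per cell, V (assumed)
R_INT = 0.05   # internal resistance, ohm (assumed)

def battery_block(current_a):
    """Return terminal voltage and heat generation for a given discharge current."""
    v_terminal = V_OC - R_INT * current_a   # voltage sag under load
    heat_w = R_INT * current_a**2           # ohmic loss dissipated as heat
    return v_terminal, heat_w

print(battery_block(20.0))  # 20 A discharge: terminal voltage and heat in W
```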

Why then do engineers not even consider simulating the whole hybrid vehicle from first principles? Could it be that they are unaware of the progress in computational science? No, everybody who has experience with modern simulation technology knows that such a task is just out of reach. What is more, it is out of reach not only today but also in the foreseeable future. What is under discussion nowadays is merely some automation of the knowledge transfer between different simulation levels, and the typical buzzwords are compact modeling, model reduction, and co-simulation.
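
To give an idea of what co-simulation means in practice, here is a deliberately simplified sketch of my own (the electrical and thermal models are made up): two solvers are advanced separately and exchange coupling data once per communication step, instead of being solved together as one monolithic first-principles system.

```python
# Simplified co-simulation loop (illustrative only): an "electrical" model and a
# "thermal" model are stepped separately and exchange coupling data once per
# communication step, instead of being solved together from first principles.

def electrical_step(temperature_c, current_a=20.0):
    """Made-up electrical model: resistance grows with temperature, heat = I^2 R."""
    r = 0.05 * (1.0 + 0.004 * (temperature_c - 25.0))
    return r * current_a**2                          # heat released, W

def thermal_step(temperature_c, heat_w, dt_s=1.0):
    """Made-up lumped thermal model: heating minus convective cooling to 25 C."""
    c_th, h = 500.0, 2.0                             # heat capacity J/K, loss W/K
    return temperature_c + dt_s * (heat_w - h * (temperature_c - 25.0)) / c_th

temperature = 25.0
for step in range(600):                              # 10 minutes of simulated time
    heat = electrical_step(temperature)              # solver 1 produces a load...
    temperature = thermal_step(temperature, heat)    # ...solver 2 consumes it

print(round(temperature, 1))  # temperature reached by the coupled run
```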

Recently there was news from IBM:

Blue Waters Update — NCSA/IBM Joint Statement
http://www-03.ibm.com/systems/deepcomputing/qanda.html

Effective August 6, 2011, IBM terminated its contract with the University of Illinois to provide the supercomputer for the National Center for Supercomputing Applications’ Blue Waters project.

In my view this is consistent with what I write. Modern simulation technology seems to be in some kind of dead end. Do not get me wrong: the simulation industry is booming, as one can no longer imagine modern engineering without simulation, so shares of simulation companies should be a good investment.

Yet the simulation technology falls short of the simulation hypothesis. It would be interesting to understand why. What is wrong? The equations? The numerics? The computer architecture? After all, it is these small practical things that force us to reconsider our concepts.

Afterthoughts

12.09.11 21:07

> What about of dumb water molecules, can they not form a wave?
> Complex things can result from very simple rules, when you have a
> huge number of those simple things interacting with each other.

I will use this example to continue my thoughts about Simulation Hypothesis and Simulation Technology.

I will change the original question as follows: can we simulate a wave starting from water molecules? I will consider it not in principle, but rather in the objective reality given to us in sensation. (This is what I learned in the USSR; Vladimir Il’ich Lenin: “Matter is the objective reality given to us in sensation.”)

If we imagine a brute-force simulation, then the answer is a definite no. Even if we consider the level of molecular simulation where the water molecules are treated classically with a given force field, it is definitely out of reach, also for the foreseeable future. Moore’s law just does not help.
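
A rough back-of-envelope estimate makes the gap visible (my own order-of-magnitude numbers, purely for illustration):

```python
# Rough order-of-magnitude estimate (assumed numbers) of why brute-force
# classical molecular dynamics of a real wave is hopeless.
molecules_per_m3 = 3.3e28        # ~3.3e28 water molecules per cubic metre
wave_volume_m3   = 1.0e3         # a very modest wave: ~1000 m^3 of water
n_molecules      = molecules_per_m3 * wave_volume_m3   # ~3e31 molecules

largest_md_atoms = 1.0e12        # heroic published MD runs: roughly 10^12 particles
shortfall = n_molecules / largest_md_atoms

print(f"molecules in the wave: {n_molecules:.1e}")
print(f"factor beyond today's largest MD runs: {shortfall:.1e}")
# ~10^19: dozens of Moore's-law doublings would be needed just for the particle
# count, and this ignores the femtosecond time steps required to cover seconds
# of wave motion.
```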

In what sense then do we usually say “yes, we can do it”? Presumably this means that we do not have to simulate each molecule in order to simulate a wave. The laws of continuum mechanics actually suffice. If we consider this numerically, then there is a nice way to arrive at continuum mechanics through coarse-graining. One can think, for example, of dissipative particle dynamics (DPD, a kind of counterpart to molecular dynamics), where we simulate not water molecules but bigger pseudo-particles. Funnily enough, DPD is pretty similar to smoothed particle hydrodynamics (SPH), an alternative method to discretize the Navier-Stokes equations. In this sense a pseudo-particle is some equivalent of a cell in finite elements/finite volumes. In a way, molecular dynamics itself could also be considered a coarse-graining scheme: first we use quantum chemistry to evaluate the force field, and then we use it at the next level.
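
A toy sketch of the coarse-graining step itself (purely illustrative, not a real DPD or SPH implementation, with made-up molecular data): molecular positions and velocities are binned into pseudo-particles that carry only the summed mass and momentum, which is exactly the kind of information the continuum description works with.

```python
# Toy coarse-graining sketch (not real DPD/SPH): group individual "molecules"
# into spatial bins and keep only the aggregate mass and momentum per bin --
# the fields a continuum (Navier-Stokes) description actually needs.
import numpy as np

rng = np.random.default_rng(0)
n_molecules = 100_000
mass = 3.0e-26                                   # ~mass of a water molecule, kg
x = rng.uniform(0.0, 1.0, n_molecules)           # positions in a 1 m interval
v = rng.normal(0.0, 300.0, n_molecules)          # velocities, m/s (made up)

n_bins = 50                                      # 50 pseudo-particles / cells
bins = np.minimum((x * n_bins).astype(int), n_bins - 1)

cell_mass     = np.bincount(bins, minlength=n_bins) * mass
cell_momentum = np.bincount(bins, weights=mass * v, minlength=n_bins)
cell_velocity = cell_momentum / cell_mass        # the coarse-grained field

print(cell_velocity[:5])   # per-cell mean velocity: what continuum mechanics sees
```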

In this sense, an interesting question is how the simulation hypothesis is supposed to work. As a brute-force simulation? Or along the second way?

Discussion:

http://groups.google.com/group/everything-list/t/311e746b43750acb

http://groups.google.com/group/everything-list/t/1dadda0d2b84f605

Comment by Bruno

http://groups.google.com/group/everything-list/msg/82dec952c838dd62

OK. Note that the mechanist hypothesis entails the falsity of the simulation hypothesis. If I am a machine, then the physical universe, actually any physical (and epistemological) thing, CANNOT be Turing emulable. Mechanism entails the falsity of the digital physics assumption, and it is an open problem whether it does not also entail the falsity of Deutsch’s thesis (the thesis that physical things are emulable in polynomial time by a quantum computer). With mechanism we can only hope that the white rabbits are relatively rare, not that they are nonexistent.

So I tend to agree with you. As far as I think mechanism is plausible, I think that we cannot simulate most natural phenomena. In particular, we cannot simulate a brain seen as a physical object, and that is why we have to choose a level of substitution and hope that our “computations” do not rely on a lower level. Mechanism is just the belief that there is such a truncation level, just as we have good evidence that it exists for all organs of the body. There is a lot of strong evidence that biology, by its fuzzy redundancy, does indeed exploit the mechanist truncation of information.

I insist on this: mechanism is the least reductionist hypothesis ever proposed in the human and exact sciences, and it makes almost everything concrete non-Turing-emulable, except oneself. So, despite much confusion on this, mechanism is almost the opposite of the simulation hypothesis. When you study the UD theorem, you should understand this by yourself.

