Science Feature: Dust Theory

Posted Monday, May 23rd, 2011 11:01 am GMT -4

In Greg Egan’s ‘Permutation City’, Paul Durham copies his consciousness into a computing substrate. While technically awesome, this may seem to you conceptually quite mundane: OK, Paul now runs on a computer. But you are mistaken. ‘Permutation City’ explores a chain of ever-more-esoteric paradigms to arrive at the ultimate counter-intuitive reality: Dust Theory.

Dust theory originated in the obscure musings of philosophers studying Artificial Intelligence back in the 1980s. I remember reading papers where they considered the case of leaves blown by the wind into the configuration ‘2 + 2 = 4’. “Is computation going on there?” they wondered. From this small beginning grew the concept that copies of your personality are realised right now throughout the universe.

Computer simulation of brain functioning is coming along apace. The “Blue Brain” project is modelling brain neurons on a supercomputer and talks optimistically about simulating an entire human brain in a decade. Suppose in a few decades’ time we can indeed simulate the ten billion neurons and one thousand trillion synaptic connections in the average human brain. You decide to sign up and submit to the scanning process, which produces a ‘perturbation’ of the canonical human brain model. The deltas are duly fed in and your consciousness is now running on a computer. What does it feel like?

By definition it can feel no different. Suppose the Upload Company provides a simulated environment interacting realistically with your simulated body – a kind of super computer game. You wake up in a version of your own house and wander around the rooms. Your thoughts and feelings exactly duplicate those of protoplasmic-you in your original dwelling.

A computer works by loading a data structure, or pattern, into memory and then updating it in discrete steps. What seems like a continuous flow of thinking to your simulated-self is, at the microsecond-scale, a series of distinct updates to individual simulated neurons. Of course, the same thing is true in our brains as well. Neurons either fire or they don’t and all thinking and feeling is the result of trillions of these discrete neuronal-synaptic messaging events. Are you still doubtful? Suppose we replaced exactly one neuron in your physical brain with a miniaturised computational state-machine which exactly mimicked the biological neuron’s input-output behaviour. How would your brain behave differently? (Not at all). Now extrapolate to replacing all the neurons in your brain in the same way.
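
To make the replacement concrete, here is a toy sketch (in Python) of a neuron treated as a discrete state machine. The threshold-and-decay model and its numbers are invented purely for illustration; they are nothing like the equations a project such as Blue Brain would actually use.

```python
# Minimal sketch of a neuron as a discrete state machine (hypothetical
# threshold model, invented for this example).

class StateMachineNeuron:
    def __init__(self, threshold=1.0, decay=0.9):
        self.potential = 0.0          # internal state
        self.threshold = threshold
        self.decay = decay

    def step(self, inputs):
        """Advance one time slice: integrate inputs, fire if over threshold."""
        self.potential = self.potential * self.decay + sum(inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1                  # output spike
        return 0                      # silent

# Swapping a biological neuron for this box changes nothing downstream,
# provided step() reproduces the same input-output mapping.
neuron = StateMachineNeuron()
print([neuron.step([0.4, 0.3]) for _ in range(5)])   # [0, 1, 0, 1, 0]
```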

Let’s run your computer simulation for a minute. You reach for your coffee, take a sip, lean back and point your screen at sciencefiction.com. Life is good, you feel. There. Sixty seconds and we … stop. The simulation in the computer advances one step, or time slice, every 6 microseconds. This is easily fast enough to correctly simulate biological neurons, which operate much more slowly. As the simulation advances during that minute, we write out each discrete neuron-state, plus the rest of the simulation-state, onto a vast backing store. How many states did we save? How many 6-microsecond slices are there in a minute? The answer is ten million. Each simulation-slice is a complex sequence of binary ones and zeros like all computer data: a table listing the input-output state of each of the ten billion neurons in your simulated brain at that moment, plus the state of your virtual body, plus the state of the environment. That’s just what a slice of a computer simulation actually is.
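
If you want to check the slice count, the arithmetic is simple enough to run. The storage estimate at the end rests on my own rough guess of one byte of state per neuron per slice, just to get an order of magnitude.

```python
# How many 6-microsecond slices fit into one simulated minute?
slice_interval_s = 6e-6            # one simulation step every 6 microseconds
simulated_time_s = 60              # one minute of subjective experience
slices = simulated_time_s / slice_interval_s
print(f"{slices:,.0f} slices")     # 10,000,000

# Very rough size of the stored record, assuming (purely as a guess)
# one byte of state per neuron per slice.
neurons = 10_000_000_000
bytes_total = slices * neurons
print(f"about {bytes_total / 1e18:.1f} exabytes for the minute")
```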

Now that we have those 10 million slices, we don’t have to use the complex program which computed each slice from the previous one. Our 10-million-slice database is like a reel of frames from a futuristic virtual-reality movie. If we simply load each slice into the computer every 6 microseconds, the simulation runs as before – you reach for your coffee, take a sip, lean back, point your screen at sciencefiction.com and feel that life is good.
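
The two ways of running that minute can be written down side by side. This is only a sketch: next_state, store and present stand in for whatever machinery the real simulator would provide.

```python
# Option 1: compute each slice from the previous one (the original simulation).
def run_by_computation(initial_slice, steps, next_state, present):
    slice_ = initial_slice
    for _ in range(steps):
        slice_ = next_state(slice_)   # expensive neuron-by-neuron update
        present(slice_)               # make the slice the machine's current state

# Option 2: replay slices already sitting in the backing store.
def run_by_replay(store, present):
    for slice_ in store:              # just load each recorded frame in turn
        present(slice_)
```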

You may feel that something essential was lost when we moved from next-state neuronal computation to just loading successive brain-neuron states from backing-store, so ask yourself: ‘How do your new computational-neurons decide how to behave?’ The first time round they have to run their computational-neuron program on their inputs to correctly compute their outputs at the next time slice. But if we record all the inputs and outputs, the second time around each computational-neuron can simply consult a look-up table to see how it should behave in the next time slice. The totality of all the inter-neuron synaptic messaging is exactly the same in both cases and equally replicates prior biological brain activity. The “insides” of the neuron don’t matter at all; the neuron is, in the jargon, a black box*.
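
Here is the same point as a (hypothetical) piece of code: a black-box neuron that computes its output on the first pass and merely looks it up on any replay. Its neighbours receive identical messages either way.

```python
class RecordedNeuron:
    """A black-box neuron: compute once, then replay from a look-up table."""

    def __init__(self, compute_output):
        self.compute_output = compute_output   # stands in for the real neuron model
        self.table = {}                        # (time_slice, inputs) -> output

    def step(self, time_slice, inputs):
        key = (time_slice, tuple(inputs))
        if key not in self.table:
            self.table[key] = self.compute_output(inputs)  # first run: compute
        return self.table[key]                 # later runs: pure table look-up
```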

Now it starts to get seriously weird. By running the simulation in a computer, we have decoupled the ‘reality’ of your simulated brain, your simulated body and your simulated home from the laws of physics. We can do things we could never do in our own reality. If we run the simulation faster or slower (one time slice every hour?) a little thought will show that it makes no difference to your experience. What about if we run the slices backwards, or out-of-order? Since each slice is a self-contained entity, structurally independent of any other, it will not matter in what order the slices are run: you still have the same subjective experience of sipping your coffee and reviewing your favourite Internet site.
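
A trivial sketch makes the order-independence plain: shuffle the recorded frames and exactly the same self-contained records get presented.

```python
import random

def present(slice_record):
    """Stand-in for loading one slice into the machine's working memory."""
    pass

# Ten toy slices; each one is a complete, self-contained record.
slices = [{"t": t, "state": f"slice-{t}"} for t in range(10)]

shuffled = slices[:]
random.shuffle(shuffled)        # backwards, shuffled -- the data doesn't care

for s in shuffled:
    present(s)

# Exactly the same records are presented in either order.
assert {s["t"] for s in shuffled} == {s["t"] for s in slices}
```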

OK, now a big one. What ‘value’ do we add by running the slices at all? After all, they already exist in the flash memory or computer disk backing store – all of them. Simply pulling slices into computer working memory, one after another, may help us make sense of the simulation: it’s brought into time-congruence with our own linear experience in the physical world. But it can make no difference to the simulation itself. Just by having all the ten million slices on the disk backing store we have somehow smeared a minute of your time into pure space. It’s hard for us to imagine that, on that disk, you are – ‘in parallel’ – having that one-minute experience, but you must be.

So now we are in sight of the promised Dust Theory. What’s so special about any particular simulation-slice resident on disk or flash? It’s just a pattern of magnetism on the disk surface or floating-gate transistor settings in flash. Although we didn’t labour the point, when a slice gets transferred to the computer its physical form changes several times anyway: first into a sequence of electromagnetic pulses on the connecting cables, then into some physical structure in computer RAM. Geographical position and physical encoding vary, yet the pattern is the same. If we had run the simulation on a global cluster of computers, with one slice in England and the next loaded onto a computer in California, the simulation would have worked just the same. So why do we need a computer at all?

The universe is a big place with a lot of material in it, often structured in complex patterns. Suppose that all over the universe there were patterns of material which, just by chance, were precise encodings of the ten million slices of your one minute in your home. Then by their very existence you would have that coffee and surfing experience. Biological you and I would never know that – to us it all just looks like random piles of dust – but simulated-you would nevertheless be there, having that experience. The universe is a truly big place, and complex. Probably every pattern of any complexity is out there in the dust somewhere. There are collections of patterns which exactly mirror the pattern of your neurons over all the lives you could ever lead. Even as you read this, there are many, many versions of you in the universe, encoded as simulations in the dust, and unaware that they are simulations. Perhaps you are one of those simulations – you could never disprove a sufficiently accurate one.

That’s Dust Theory as Greg Egan used it in Permutation City. He followed up with an essay on Dust Theory here. In physics there is a related concept: the Boltzmann Brain.

Dust Theory has proven quite controversial: if you see a flaw in the above argument please write to me using the comment box below.

* Note: if you are still worried, it’s a well-known result in computing that you can move complexity from a program into its data. The program then becomes a simple interpreter which applies a rule-base to a data-base, updating either or both. In our physical world, the role of interpreter is played by the laws of physics which uniformly animate the processes of mechanics, dynamics, chemistry, biology and brains. There can be nothing intrinsically special in the ‘next-state’ function.
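
As a toy illustration of that shift from program to data: the ‘interpreter’ below never changes, while all of the behaviour lives in the rule-base (the rules themselves are made up for the example).

```python
# All of the model lives in the data; the program is a dumb loop.
rules = {
    ("quiet", "excitatory"): "fired",
    ("quiet", "inhibitory"): "quiet",
    ("fired", "excitatory"): "fired",
    ("fired", "inhibitory"): "quiet",
}

def interpret(state, inputs):
    """A minimal interpreter: repeatedly apply the rule-base to the state."""
    for signal in inputs:
        state = rules[(state, signal)]
    return state

print(interpret("quiet", ["excitatory", "inhibitory", "excitatory"]))  # fired
```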

  • Tom

    Does our consciousness reside in the successive states (simulation slice), or does it reside in the state transitions as we move from state to state? (or even some combination of the two?)
    For me the act of thinking (naively) seems more like state transitions (or the calculation of the next state from the current state + inputs), rather than the static states themselves.

    • Nobody knows for sure, but as I tried to indicate in the article, the state-transition function can be very, very simple (see note at end). In real life, the state-transition function is implemented via the continuous motion of molecules in the neurons and synapses of the brain. Since these are common to ALL phenomena it seems pointless to look for the uniqueness of consciousness there.

      However, no-one knows how the sense of a ‘self embedded in flowing time’ is constructed by the brain, so I try not to be dogmatic about it.

  • HH

    The argument is not particularly flawed; however, the intuitions (“feelings”) regarding it almost always are, as are notions of “identity”, which are very relevant here. There is an unsolved problem related to all of this which, for nearly everybody, isn’t even realised.

  • Pravinsash

    I got the impression from Permutation City that the author goes even further. The dust doesn’t have to be in the right configuration by chance. The configurations perceive themselves in ANY pattern of dust, much like an alphabet soup might contain self-perceiving stories. This is very similar to what Hans Moravec suggested in Simulation, Consciousness and Existence.
    I find it both stunning and unsatisfactory at the same time. What counts as an encoding? Is there any way to test it? Also, Greg Egan points out that it has the same problem as the Boltzmann brain: we seem to live in a universe with an abundance of order and galaxies, far more than might be required for a self-perceiving brain. An embarrassment of riches, as a physicist put it.

  • Balassa Márton

    Is there even a need for an encoding? The code of successive states, as well as the transition rules, are themselves merely mathematical objects, like the Nth digit of Pi or Pythagoras’ theorem – existing regardless of some kind of sentient being expressing them. So there is no need for an actual, physical simulation: any system that is describable mathematically exists just by itself.