Imagine you’re shopping for a new car, and the salesperson says, “Did you know, this car doesn’t just drive on the road.”

“Oh?” you reply.

“Yeah, you can also use it to do other things. For instance, it folds up to make a pretty good bicycle. And it folds out to make a first-rate airplane. Oh, and when submerged it works as a submarine. And it’s a spaceship too!”

Quantized

A monthly column in which top researchers explore the process of discovery. This month’s columnist, Michael Nielsen, is a computer scientist and author of three books.

You’d assume the salesperson was joking. But we take a comparable flexibility for granted in our computers. We can use the same machine to fly past the Statue of Liberty with a flight simulator, make financial projections using a spreadsheet, chat with friends on Facebook, and do many other things. It’s very nearly as astonishing as a single machine that works as a car, bicycle and spaceship.

Two characteristics of computers make this flexibility possible. First, computers are programmable. That is, by inputting an appropriate sequence of instructions, we can change a computer’s behavior. Second, computers are universal. That is, with the right program we can make a computer perform any algorithmic process whatsoever, as long as the machine has enough memory and time.

These ideas of programmability and universality have become so embedded in our culture that they’re familiar even to many children. But historically they were remarkable breakthroughs. They were crystallized in a 1937 paper by Alan Turing, who argued that any algorithmic process whatsoever could be computed by a single universal, programmable computer. The machine Turing described — often known as a Turing machine — was the ancestor of modern computers.
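Turing's idea is concrete enough to sketch in a few lines of code. The toy machine below (a Python illustration with a made-up rule table, not Turing's own construction) shows the essential ingredients: a tape of symbols, a read/write head, and a finite table of rules that can be swapped out to change the machine's behavior.

```python
# A minimal Turing machine: a tape, a head, and a rule table.
# The rule table here (a hypothetical example) increments a unary number.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rule table: (state, read symbol) -> (write symbol, move, next state).
# Scan right past the 1s, then append one more 1.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(rules, "111"))  # three becomes four: 1111
```

Changing only the `rules` table turns the same loop into a different machine, which is programmability in miniature; Turing's deeper result is that one fixed rule table can interpret any other table fed to it on the tape.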


To make his argument, Turing needed to show that his universal computer could perform any conceivable algorithmic process. This wasn’t easy. Until Turing’s time, the notion of an algorithm was informal, not something with a rigorous, mathematical definition. Mathematicians had, of course, previously discovered many specific algorithms for tasks such as addition, multiplication and determining whether a number is prime. It was pretty straightforward for Turing to show that those known algorithms could be performed on his universal computer. But that wasn’t enough. Turing also needed to convincingly argue that his universal computer could compute any algorithm whatsoever, including all algorithms that might be discovered in the future. To do this, Turing developed several lines of thought, each giving an informal justification for the idea that his machine could compute any algorithmic process. Yet he was ultimately uncomfortable with the informal nature of his arguments, saying “All arguments which can be given are bound to be, fundamentally, appeals to intuition, and for this reason rather unsatisfactory mathematically.”

In 1985, the physicist David Deutsch took another important step toward understanding the nature of algorithms. He made the observation that algorithmic processes are necessarily carried out by physical systems. These processes can occur in many different ways: A human being using an abacus to multiply two numbers is obviously profoundly different from a silicon chip running a flight simulator. But both are examples of physical systems, and as such they are governed by the same underlying laws of physics. With this in mind, Deutsch stated the following principle. I’ll use his words — although the language is specialized, it’s actually pretty accessible, and fun to see in the original form:

Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means.

In other words, take any physical process at all, and you should be able to simulate it using a universal computer. It’s an amazing, *Inception*-like idea, that one machine can effectively contain within itself everything conceivable within the laws of physics. Want to simulate a supernova? Or the formation of a black hole? Or even the Big Bang? Deutsch’s principle tells you that the universal computer can simulate all of these. In a sense, if you had a complete understanding of the machine, you’d understand all physical processes.

Deutsch’s principle goes well beyond Turing’s earlier informal arguments. If the principle is true, then it automatically follows that the universal computer can simulate any algorithmic process, since algorithmic processes are ultimately physical processes. You can use the universal computer to simulate addition on an abacus, run a flight simulator on a silicon chip, or do anything else you choose.

Furthermore, unlike Turing’s informal arguments, Deutsch’s principle is amenable to proof. In particular, we can imagine using the laws of physics to deduce the truth of the principle. That would ground Turing’s informal arguments in the laws of physics and provide a firmer basis for our ideas of what an algorithm is.

In attempting this, it helps to modify Deutsch’s principle in two ways. First, we must expand our notion of a computer to include quantum computers. This doesn’t change the class of physical processes that can be simulated in principle, but it does allow us to quickly and efficiently simulate quantum processes. This matters because quantum processes are often so slow to simulate on conventional computers that they may as well be impossible. Second, we must relax Deutsch’s principle so that instead of requiring perfect simulation, we allow simulation to an arbitrary degree of approximation. That’s a weaker idea of what it means to simulate a system, but it is likely necessary for the principle to hold.

With these two modifications, Deutsch’s principle becomes:

Every finitely realizable physical system can be simulated efficiently and to an arbitrary degree of approximation by a universal model (quantum) computing machine operating by finite means.

No one has yet managed to deduce this form of Deutsch’s principle from the laws of physics. Part of the reason is that we don’t yet know what the laws of physics are! In particular, we don’t yet know how to combine quantum mechanics with general relativity. And so it’s not clear that we can use computers to simulate processes likely to involve quantum gravity, such as the evaporation of black holes.

But even without a quantum theory of gravity, we can ask whether computers can efficiently simulate the best theories of modern physics — the Standard Model of particle physics, and general relativity.

Researchers are actively working to answer these questions. Over the past few years, the physicist John Preskill and his collaborators have shown how to use quantum computers to efficiently simulate several simple quantum field theories. You can think of these as prototypes of the Standard Model of particle physics. They do not contain the full complexity of the Standard Model, but they have many of its basic ideas. While Preskill and his collaborators haven’t yet succeeded in explaining how to simulate the full Standard Model, they have overcome many technical obstacles to doing so. It’s plausible that a proof of Deutsch’s principle for the Standard Model will be found in the next few years.

The case for general relativity is murkier. General relativity allows for strange singularities that rip and tear space-time in ways that are not yet fully understood. While numerical relativists have developed many techniques for simulating specific physical situations, to my knowledge no complete, systematic analysis of how to efficiently simulate general relativity has yet been done. It’s an intriguing open problem.

In his book *The Sciences of the Artificial*, the polymath Herbert Simon distinguished between the sciences of the natural — such as physics and biology, in which we study naturally occurring systems — and sciences of the artificial, like computer science and economics, in which we study systems created by human beings.

At first glance, it seems that the artificial sciences should be special cases of the natural sciences. But as Deutsch’s principle suggests, the properties of artificial systems such as computers may be just as rich as those of naturally occurring physical systems. We can imagine using computers to simulate not only our own laws of physics, but maybe even alternate physical realities. In the words of the computer scientist Alan Kay: “In natural science, Nature has given us a world and we’re just to discover its laws. In computers, we can stuff laws into it and create a world.” Deutsch’s principle provides a bridge uniting the sciences of the natural and the artificial. It’s exciting that we’re nearing proof of this fundamental scientific principle.

*This article was reprinted on ScientificAmerican.com.*

"as long as the machine has enough memory and time." … and energy

Do you think the artificial and natural might merge in the study and modeling of the brain?

I think memory capacity and size will always work against the development of quantum computers, and beyond that AI and AL.

Two fundamental problems: Lorenz (chaos) and Heisenberg (uncertainty).

"Chaos: When the present determines the future, but the approximate present does not approximately determine the future." – Edward Lorenz.

"the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa." – Werner Heisenberg.

The innate inability to determine initial conditions precludes the simulation of a vast number of systems.
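Lorenz's point is easy to see numerically. The sketch below (a standard textbook illustration in Python, not from the article) iterates the chaotic logistic map x -> 4x(1-x) from two starting points that differ by one part in ten billion and tracks how far they drift apart.

```python
# Sensitive dependence on initial conditions in the logistic map.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two starting points differing by 10^-10
max_gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # the 10^-10 difference has grown to order 1
```

After a few dozen iterations the gap is of order 1: knowing the approximate present to ten decimal places tells you essentially nothing about the state a hundred steps later.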

Nice article, I especially appreciate the diversity of sources.

For me, a layperson, the idea that the computer can simulate "everything" seems ludicrous. A computer is just a brain-child like mathematics is. We can't simulate what we don't know or can't understand.

We should also seek something different from "rationalism brain rot". You can get new perspectives from "Conceptual Blockbusting" by James L. Adams and "Think! Before It's Too Late" by Edward de Bono.

Re: dddvvv

>We can't simulate what we don't know or can't understand.

Actually, we can!

We can set up a system with a few possible states and simple rules about those states, then iterate through all the possible combinations of rules. Then we can run the simulation and observe the behavior.

Simulating simple systems, and studying the rise of complicated structures in those systems can give us some insight into how and why our universe produces complicated structures.

I highly recommend Stephen Wolfram's tome on the subject, "A New Kind of Science":

http://wolframscience.com/nksonline/toc.html
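The "few states, simple rules" recipe in this comment can be made concrete with an elementary cellular automaton, the kind of system Wolfram's book catalogs. The sketch below (in Python; the choice of Rule 110 and the grid size are just illustrative) updates each cell from its own value and its two neighbors' values, and the same loop works for any of the 256 elementary rules.

```python
# Elementary cellular automaton: each cell is 0 or 1, and its next value
# depends only on itself and its two neighbors via an 8-entry rule table.

def step(cells, rule):
    """Apply one update of the given rule number (0-255), wrapping at edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((rule >> pattern) & 1)  # look up that bit of the rule number
    return out

# Start with a single live cell and run Rule 110, which is known to produce
# complicated, long-lived structures from this trivial starting state.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 110)
```

Iterating the outer loop over `rule in range(256)` is exactly the "try all the rules and watch what happens" survey the comment describes.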

@Yvan: It depends of course on the details of the universal computer being used. For instance, a reversible computer can, in principle, operate with zero energy consumption. In practice, time and space are the most common places problems arise, but you're quite right that for some architectures the limitation may be better described in some other terms (e.g., energy).

@Raymond Ernst: In the sense that we're always modifying our brains, perhaps yes, one day they will merge. We'll need to understand the brain quite a bit better, though!

@Julia M. Turing: Maybe they'll block AI, although Moore's Law has certainly enabled us to make incredible progress, and maybe will carry on long enough that this won't be an impediment to AI. I don't see why they'd block quantum computing. The problem with quantum computing so far has been that it's hard to get even a tiny quantum computer to work really well, never mind scaling it up!

@Roy Lofquist: On the uncertainty principle: perhaps surprisingly, this isn't a problem. A quantum computer simulation of a quantum system will, itself, possess the same kind of uncertainties as in the system it's simulating, albeit in simulated form.

On chaos: this is more challenging, and requires a longer reply than I have time to write right now. I'll try to come back to it. It's a really interesting point!

There's a very deep, limiting assumption (or if you will, constraint) built into Turing's abstract conception of a Turing machine. One that real physics is unlikely to honor in the end.

Namely, Turing's conceptual computer is based upon a tape with symbols (in modern terms, a sequence of instructions) in which previous instructions can write out new ones. If you get past his overly abstruse and confusing language, the Turing machine is a fundamentally classical creature: every algorithmic step or state change of the device is fully deterministic, and, even more basic, IT OBEYS A ONE-WAY ARROW OF TIME. That is, the machine's behavior is only affected by instructions given to it within its own past (the "past" light cone of its current world line).

The problem is that this classical arrow of time is already breaking down in both relativity and quantum physics, and the next theory is likely to deform it even further, if not jettison it entirely. Relativity quickly leads to the conception of the universe as a malleable 4D loaf of bread, where 'time' is just one of the dimensions, and quantum physics has more than a few hints of effects preceding their causes. Physical systems in which retro-causal, fundamentally random, or acausal effects play significant roles (again, assuming such things "really" exist) are NOT Turing machines: they CAN be influenced by "future instructions" or non-deterministic chaos. The existence of such physics would then imply that Turing machines only conceptually model a strict subset of physical reality, really just the subset of all possible systems that classical and semi-classical physics describes fairly accurately. The huge upside of that would be that we could build so-called "hyper-computers," machines capable of running algorithms that NO Turing machine, "quantum" or not, could handle.

dddvvv, you missed the point: it is the algorithm that is important, not the hardware "child-brain" machine. The algorithm is the brain, and you are right to the point that if the writers of the algorithm have no understanding of a process, they will be bounded by their ignorance.

This article entirely ignores that a Turing machine (a computer) can only do what it can "calculate", and we have a whole branch of mathematics that proves that there are entire sets of numbers that are "uncountable".

So how does the universe-computer analogy work, when computers can only perform what is countable and the universe goes far, far beyond that?

How interesting. Coincidentally, yesterday I started a deep dive into Turing's paper via a book I picked up in a used bookstore. I can highly recommend THE ANNOTATED TURING: A GUIDED TOUR THROUGH ALAN TURING'S HISTORIC PAPER ON COMPUTABILITY AND THE TURING MACHINE. Those who want to go beyond digital computability should read PROGRAMMING THE UNIVERSE: A QUANTUM COMPUTER SCIENTIST TAKES ON THE COSMOS by Seth Lloyd. Both of these books are accessible to those like me who are not fully trained in the sciences, but are nevertheless informed and curious. Twitter @SalesPhase

"Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means."

An infinitely realizable system, such as an infinite or immeasurable universe, can only be simulated by a computing machine operating with the same infinite means.

Outside today's finite computer box is tomorrow's computer freedom!


@Jay Jaysen-James

That's an interesting book. But are we like the "blind man and the elephant," feeling the tail and saying it's a broom? Do those patterns mean something, or are they simple paintings we match to reality?

@JK

The algorithm is under the same scrutiny. You should read Arthur C. Clarke's "Childhood's End" for fun. The book envisions man left speechless by an alien world and how he deals with understanding it.

These types of articles amaze me. Does the author really think that everyone is so gullible? Deutsch is merely one of many who write intelligently on computers. No one thinks he is infallible. And the claims the author makes for him are downright false. Deutsch knows that Kurt Gödel has proven that no machine can solve every problem in a finite number of steps. We have heard outrageous claims for the godlike ability of computers in the past. Now, when they have all been put to rest, someone comes along with no knowledge of the subject shouting hallelujah.

I wonder what the "arbitrary degree of approximation" in the modified version of Deutsch's principle means. I even wonder how one can define a "degree of approximation" for a physical system. For a single observable, that notion makes sense, but for a whole quantum system?

The problem of chaos that has already been mentioned looks like a special case of the problem of defining a degree of approximation. In a classical system and with the classical notion of computability, a system's dynamical state at any finite time can be computed to an arbitrarily specific accuracy, given sufficient resources. But you have to specify both the accuracy and the time limit, so this wouldn't qualify as an arbitrary degree of approximation for the whole system. Does quantum chaos behave better in this respect?
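The classical half of this trade-off can be made concrete. The sketch below (a Python illustration, not tied to anything in the article) iterates the logistic map x -> 4x(1-x) using exact rational arithmetic, so the state at any finite time carries no rounding error at all; the catch is that the size of the exact representation roughly doubles every step, which is why both the accuracy and the time horizon must be fixed before you can budget the resources.

```python
from fractions import Fraction

# Exact iteration of x -> 4x(1-x): no rounding error at any finite time,
# but the number of digits in the exact state grows with each step.
x = Fraction(3, 10)
sizes = []
for step in range(12):
    x = 4 * x * (1 - x)
    sizes.append(len(str(x.denominator)))

print(sizes)  # denominator digit counts roughly double each step
```

After only a dozen steps the exact state already needs thousands of digits, so computing to time T with accuracy eps costs resources that grow with both T and eps, just as the comment says.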

@James Briggs: There is no contradiction between the incompleteness theorem and Deutsch's principle. Satisfying Deutsch's principle certainly does not imply that a machine "can solve every problem in a finite number of steps".

There is an element here of everything looking like a nail simply because we have a hammer. About the simplest possible physical interaction, between a photon and an electron, is an infinite regression of virtual photons and electrons according to QED. The only reason computers can produce the correct answer is because physicists consider only situations where the symmetries can be used to cancel out the infinities. Otherwise we would be faced with non-convergent sums. If you think about what the world would look like if our theories and tools were inadequate, it would seem random and inconceivably complex, which is exactly what quantum events look like to us.

It seems premature to say we are nearing a proof of this when solving physics' open problems is merely a prerequisite!

@Andrew Raybould: When writing the last line I was, I must admit, thinking of the problem of proving Deutsch's principle for general relativity and the Standard Model. That does seem possible in the near future. Of course, a full proof will require that we solve the problem of quantum gravity. That may take a while!

"In a sense, if you had a complete understanding of the machine, you’d understand all physical processes." But isn't this the converse (i.e., not necessarily true) of what follows from the previous statement? If you understood all the physical processes, you would have a complete understanding of the machine. Not all that remarkable actually.

Apparently Deutsch's principle is considered to be "a law of physics", at least by Deutsch.

http://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence

"…That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory."

It is an interesting article but at no point does he point out that the linchpin of his argument for strong AGI is his own principle which, without any evidence, he claims is a law of physics.

Is this theory falsifiable? Given an algorithm of this universe, can the universal computer tell me which of the infinitely many mathematically undecidable principles it exemplifies?

What does this idea contribute to human knowledge outside of a vague hand wave?

Does the emperor really have new clothes?

I am not convinced that the principle above is compatible with either the Strong Free Will Theorem of Conway & Kochen or the Free Randomness Theorem of Colbeck & Renner:

http://arxiv.org/abs/1105.3195

These 2 theorems are like incompleteness theorems for physics.

And here is a problem in quantum physics which is now shown to be undecidable:

http://www.nature.com/news/paradox-at-the-heart-of-mathematics-makes-physics-problem-unanswerable-1.18983

1. "EVERY FINITELY REALIZABLE PHYSICAL SYSTEM can be simulated efficiently and to an arbitrary degree of approximation by a universal model (quantum) computing machine operating by FINITE means."

2. "Satisfying Deutsch's principle certainly does not imply that a machine "can solve EVERY PROBLEM in a FINITE number of steps""

Do you mean that there are properties of physical systems that are not finitely realizable, or that there are problems posed by finitely realizable physical systems that cannot be solved in a finite number of steps?

IMHO, both would make Deutsch's principle quite uninteresting…

Can a machine and algorithm of finite scope simulate, to any level of approximation, a violation of Bell's inequality? If the answer is no, then must the principle die, or be further limited?