Classical Computing Embraces Quantum Ideas

IBM Research

A silicon chip housing three qubits, built by researchers at IBM, who hope to someday build chips containing hundreds or thousands of qubits.

Someday, quantum computers may be able to solve complex optimization problems, quickly mine huge data sets, simulate the kind of physics experiments that currently require billion-dollar particle accelerators, and accomplish many other tasks beyond the scope of present-day computers. That is, if they are ever built. But even as daunting technical challenges keep the dream at bay, theorists are increasingly putting the ideas and techniques of quantum computing to work solving deep, long-standing problems in classical computer science, mathematics and cryptography.

“There are quite vigorous debates about whether quantum computers will ever actually be built,” said Chris Peikert, a cryptographer and computer scientist at Georgia Institute of Technology. “But that’s a separate question from whether quantum techniques or quantum algorithms can help you solve problems in new ways.”

In recent years, quantum ideas have helped researchers prove the security of promising data encryption schemes called lattice-based cryptosystems, some applications of which can shroud users’ sensitive information, such as DNA, even from the companies that process it. A quantum computing proof also led to a formula for the minimum length of error-correcting codes, which are safeguards against data corruption.

Quantum ideas have also inspired a number of important theoretical results, such as a refutation of an old, erroneous algorithm that purported to efficiently solve the famously difficult traveling salesman problem, which asks for the shortest route through a set of cities.

“If it only happened once it would be a coincidence, but there are so many instances when we ‘think quantumly’ and come up with a proof,” said Oded Regev, a computer scientist at New York University.

Oded Regev

Lattices with hundreds of dimensions (instead of the three dimensions pictured here) can be used for data encryption, thanks to a security proof that emerged from quantum computing. 

This recurring theme has led some researchers to argue that quantum computing is not an esoteric subfield of computer science, but rather a generalization of classical computing, in much the same way that polygons are a generalization of triangles. Just as polygons can have any number of sides while triangles only have three, quantum computers can perform operations represented by any numbers (positive or negative, real or imaginary), while operations on classical computers use only nonnegative real numbers.
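To make that analogy concrete, here is a minimal sketch (an illustration of the standard textbook picture, not code from the article): a classical probabilistic operation on a bit is a stochastic matrix of nonnegative reals, while a quantum gate such as the Hadamard is a unitary matrix whose entries may be negative or complex, which lets amplitudes cancel each other out.

```python
import math

def apply(matrix, vec):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [matrix[0][0] * vec[0] + matrix[0][1] * vec[1],
            matrix[1][0] * vec[0] + matrix[1][1] * vec[1]]

# Classical: a "noisy NOT" that flips a bit with probability 0.9.
# All entries are nonnegative reals, and each column sums to 1.
noisy_not = [[0.1, 0.9],
             [0.9, 0.1]]

# Quantum: the Hadamard gate, with a negative entry.
h = 1 / math.sqrt(2)
hadamard = [[h,  h],
            [h, -h]]

# Applying the noisy NOT twice just mixes the probabilities further...
p = apply(noisy_not, apply(noisy_not, [1.0, 0.0]))   # [0.82, 0.18]

# ...but applying Hadamard twice returns the qubit exactly to |0>,
# because the two contributions to the |1> amplitude cancel.
amps = apply(hadamard, apply(hadamard, [1.0, 0.0]))  # ~[1.0, 0.0]
```

The cancellation in the second computation is interference, something no matrix of nonnegative reals can produce; in this sense the classical case really is the restricted one.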

As the more general case, quantum ideas are a powerful tool in developing more specific classical computing proofs. “There are a number of classical problems that have nothing to do with quantum, but that are most easily analyzed by generalizing to the quantum level, proving something using quantum information theory, and scaling back the result to the classical level,” said Ronald de Wolf, a theoretical computer scientist at the Dutch Centre for Mathematics and Computer Science.

Currently, it is estimated that fewer than 5 percent of theoretical computer scientists study quantum computing. But researchers say that recent success from “thinking quantumly” has led a growing number of theorists to brush up on their physics. “These very striking spinoffs of quantum computing have actually drawn classical computer scientists into learning something about quantum computing,” said Scott Aaronson, a theoretical computer scientist at the Massachusetts Institute of Technology.

The goal of quantum computing is to harness the peculiar behavior of particles at the quantum scale in order to perform calculations that aren’t believed to be feasible with conventional computers. An ordinary computer stores “bits” of information in transistors, which, like switches, can be configured in one of two states, represented by “1” or “0.” A quantum computer stores “qubits” of information in subatomic particles, such as electrons or photons, which can exist in state 1 or 0, or in a superposition of both states, and which can become entangled with one another, so that the state of one qubit decides the state of another.

IBM Research

A close-up view of a silicon chip housing three qubits. 

Superposition and entanglement cause qubits to behave very differently from bits. Whereas a two-bit circuit in a conventional computer can be in only one of four possible states (0 and 0, 0 and 1, 1 and 0, or 1 and 1), a pair of qubits can be in a combination of all four. As the number of qubits in the circuit increases, the number of possible states, and thus the amount of information contained in the system, increases exponentially. A quantum computer with just a few hundred qubits would be able to solve certain problems more quickly than today’s supercomputers.
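Both points can be sketched in a few lines of plain Python (an illustrative toy, assuming the standard state-vector picture with basis order |00>, |01>, |10>, |11>; the gate names are textbook conventions, not details from the article):

```python
import math

def num_amplitudes(n_qubits):
    # An n-qubit state is described by 2**n complex amplitudes,
    # so the description doubles with every added qubit.
    return 2 ** n_qubits

# Two qubits -> 4 amplitudes; ten -> 1024; three hundred -> a number
# far beyond what any conventional computer could store.
sizes = [num_amplitudes(n) for n in (2, 10, 300)]

# Entanglement: a Hadamard gate on the first qubit followed by a CNOT
# turns |00> into the Bell state (|00> + |11>)/sqrt(2), in which
# neither qubit has a definite state of its own.
h = 1 / math.sqrt(2)
state = [1.0, 0.0, 0.0, 0.0]           # amplitudes of |00>,|01>,|10>,|11>
state = [h * state[0] + h * state[2],  # Hadamard on the first qubit
         h * state[1] + h * state[3],
         h * state[0] - h * state[2],
         h * state[1] - h * state[3]]
state = [state[0], state[1], state[3], state[2]]  # CNOT: flip 2nd if 1st is 1
# state is now [1/sqrt(2), 0, 0, 1/sqrt(2)]
```

Measuring the Bell state yields 00 or 11 with equal probability, never 01 or 10: the outcome for one qubit fixes the outcome for the other, which is the entanglement described above.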

The only problem is that no one has managed to construct a quantum circuit with more qubits than you can count on both hands. Chris Lirakis, a physicist in the superconducting quantum computation group at IBM Research, explained that in order to keep the delicate entanglement of a system of qubits from collapsing, the system must be isolated and cooled to a temperature near absolute zero. At the same time, the qubits must be spaced about a centimeter apart to prevent an operation performed on one qubit from altering the states of neighboring ones. Together, these constraints would make a thousand-qubit system far too large to fit into the kinds of refrigerators that can achieve such extreme cooling.

“There are a lot of really serious engineering challenges that need to be brought to bear in order to make the system scalable,” Lirakis said. “It’s this tug-of-war between all these different issues.”

Regev, who worked with Peikert in using quantum ideas to prove the security of lattice-based cryptosystems, says he hopes quantum computers will be built in his lifetime so he can see them in action. “But quantum has made such a great impact that even if quantum computers are never built, I wouldn’t care too much,” he said.

As quantum techniques become more popular among computer scientists, they will likely produce more classical results. “It’s these results that convince me that even if the universe had not been quantum mechanical,” Aaronson said, “eventually computer scientists would have invented quantum computing as a proof tool.”


Reader Comments

  • Nice article, and probably more timely than the author realized when it was written, especially when considering what D-Wave has accomplished since. I wonder what the author would think of the Simulation Hypothesis, the philosopher Nick Bostrom, and books such as “On Computer Simulated Universes”. All of this seems more plausible now.

  • Applying quantum computing to classical computing may be productive, but it is not at all clear that quantum computing itself will ever produce much. If I understand this correctly (and it is not my field, so I may be mistaken), one of the implicit assumptions of quantum computing is that there is no limit, in principle, to the number of states that may be superposed. I suspect that this assumption is true if, and only if, space (or space-time if you prefer) is a continuum. I very much doubt that it is a continuum, and I suspect that the “graininess” of space-time will impose some limit on just how much superposition is achievable even in principle.

    Does anyone know enough about these things to feel able to calculate (to an order of magnitude) what sort of limit we would be talking about here, and whether it would be a serious constraint to building a quantum computer that was useful in practice?

  • Linda:

    There are definitely theoretical physicists working on ideas in which space-time isn't a normal classical continuum at sufficiently tiny scales, usually at distances around the Planck length (which is roughly 10-to-the-minus-35 of a meter, extremely small indeed): there is a wide range of them, such as string theory (which hides the non-continuum nature of space at that scale better than some other models, but definitely has it), space-time foams in loop quantum gravity, noncommutative geometries, causal sets, and probably a dozen others. In fact it seems to be a pretty general property of attempts to unify quantum mechanics and relativity that something rather odd has to happen to the nature of space-time at around that scale. But in just about all of those that I can think of, behavior down at that scale is either still entirely quantum, or in some cases acquires even more quantum weirdness than regular quantum mechanics has (the uncertainty principle gets even stronger, with an additional uncertainty term added due to gravitational effects like tiny black holes forming and evaporating). But I can't think of any of them that would clearly impose an effective upper limit on the number of states that can be superposed in a quantum computer. Even if you assume that space-time is discrete at that scale, made up of little 'particles' of space-time glued together somehow, like a lattice or a foam (which some but not all of these models do, and which seems like the most restrictive assumption), these models almost all still fit into Feynman's 'sum over histories' interpretation of quantum mechanics, and in that it's very clear that the number of states a quantum computer can superpose doubles every time you add another qubit: so it scales exponentially with the size of your quantum computer, with no limit.
    (About the only exception would be those models that try to derive both relativity and quantum mechanical behavior from Wick-rotated statistical behavior, where I'm not familiar enough with how they work to tell you whether they make a difference or not: I believe there's still a sum involved, but I assume it's no longer exactly a Feynman sum over histories.)

    Theoretical physicists have spent a lot of time trying to think of ways that any of the hypothesized forms of Planck-scale graininess of space-time could possibly affect anything big enough that we'd have any chance of measuring it in a lab or with a telescope, and experimental physicists have tested every even-vaguely-plausible idea any theoretician came up with. So far the result has been a big fat nothing: it seems that once you get up to any scale we can reach (all of which are far, far bigger than the Planck scale), the minute graininess of space-time has no measurable effects at all; to all intents and purposes, it might as well be a continuum. If it imposed limits on quantum computation, I'm sure some theoretician would have pointed this out by now, and experimentalists would likely have come up with a cunning way to test it in some condensed-matter system without having an actual quantum computer.

    So I strongly suspect your suspicion is incorrect: I don't think the graininess of space-time down at the Planck scale is going to impose any practical limitations on quantum computers.
