In 2008, Daniel Spielman told his Yale University colleague Gil Kalai about a computer science problem he was working on, concerning how to “sparsify” a network so that it has fewer connections between nodes but still preserves the essential features of the original network.

Network sparsification has applications in data compression and efficient computation, but Spielman’s particular problem suggested something different to Kalai. It seemed connected to the famous Kadison-Singer problem, a question about the foundations of quantum physics that had remained unsolved for almost 50 years.

Over the decades, the Kadison-Singer problem had wormed its way into a dozen distant areas of mathematics and engineering, but no one seemed to be able to crack it. The question “defied the best efforts of some of the most talented mathematicians of the last 50 years,” wrote Peter Casazza and Janet Tremain of the University of Missouri in Columbia, in a 2014 survey article.

As a computer scientist, Spielman knew little of quantum mechanics or the Kadison-Singer problem’s allied mathematical field, called C*-algebras. But when Kalai, whose main institution is the Hebrew University of Jerusalem, described one of the problem’s many equivalent formulations, Spielman realized that he himself might be in the perfect position to solve it. “It seemed so natural, so central to the kinds of things I think about,” he said. “I thought, ‘I’ve got to be able to prove that.’” He guessed that the problem might take him a few weeks.

Instead, it took him five years. In 2013, working with his postdoc Adam Marcus, now at Princeton University, and his graduate student Nikhil Srivastava, now at the University of California, Berkeley, Spielman finally succeeded. Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem.

Mathematicians in these disciplines greeted the news with a combination of delight and hand-wringing. The solution, which Casazza and Tremain called “a major achievement of our time,” defied expectations about how the problem would be solved and seemed bafflingly foreign. Over the past two years, the experts in the Kadison-Singer problem have had to work hard to assimilate the ideas of the proof. Spielman, Marcus and Srivastava “brought a bunch of tools into this problem that none of us had ever heard of,” Casazza said. “A lot of us loved this problem and were dying to see it solved, and we had a lot of trouble understanding how they solved it.”

“The people who have the deep intuition about why these methods work are not the people who have been working on these problems for a long time,” said Terence Tao, of the University of California, Los Angeles, who has been following these developments. Mathematicians have held several workshops to unite these disparate camps, but the proof may take several more years to digest, Tao said. “We don’t have the manual for this magic tool yet.”

Computer scientists, however, have been quick to exploit the new techniques. Last year, for instance, two researchers parlayed these tools into a major leap forward in understanding the famously difficult traveling salesman problem. There are certain to be more such advances, said Assaf Naor, a mathematician at Princeton who works in areas related to the Kadison-Singer problem. “This is too profound to not have many more applications.”

**A Common Problem**

The question Richard Kadison and Isadore Singer posed in 1959 asks how much it is possible to learn about a “state” of a quantum system if you have complete information about that state in a special subsystem. Inspired by an informally worded comment by the legendary physicist Paul Dirac, their question builds on Werner Heisenberg’s uncertainty principle, which says that certain pairs of attributes, like the position and the momentum of a particle, cannot simultaneously be measured to arbitrary precision.

Kadison and Singer wondered about subsystems that contain as many different attributes (or “observables”) as can compatibly be measured at the same time. If you have complete knowledge of the state of such a subsystem, they asked, can you deduce the state of the entire system?

In the case where the system you’re measuring is a particle that can move along a continuous line, Kadison and Singer showed that the answer is no: There can be many different quantum states that all look the same from the point of view of the observables you can simultaneously measure. “It is as if many different particles have exactly the same location simultaneously — in a sense, they are in parallel universes,” Kadison wrote by email, although he cautioned that it’s not yet clear whether such states can be realized physically.

Kadison and Singer’s result didn’t say what would happen if the space in which the particle lives is not a continuous line, but is instead some choppier version of the line — if space is “granular,” as Kadison put it. This is the question that came to be known as the Kadison-Singer problem.

Based on their work in the continuous setting, Kadison and Singer guessed that in this new setting the answer would again be that there are parallel universes. But they didn’t go so far as to state their guess as a conjecture — a wise move, in hindsight, since their gut instinct turned out to be wrong. “I’m happy I’ve been careful,” Kadison said.

Kadison and Singer — now at the University of Pennsylvania and the Massachusetts Institute of Technology (emeritus), respectively — posed their question at a moment when interest in the philosophical foundations of quantum mechanics was entering a renaissance. Although some physicists were promoting a “shut up and calculate” approach to the discipline, other, more mathematically inclined physicists pounced on the Kadison-Singer problem, which they understood as a question about C*-algebras, abstract structures that capture the algebraic properties not just of quantum systems but also of the random variables used in probability theory, the blocks of numbers called matrices, and regular numbers.

C*-algebras are an esoteric subject — “the most abstract nonsense that exists in mathematics,” in Casazza’s words. “Nobody outside the area knows much about it.” For the first two decades of the Kadison-Singer problem’s existence, it remained ensconced in this impenetrable realm.

Then in 1979, Joel Anderson, now an emeritus professor at Pennsylvania State University, popularized the problem by proving that it is equivalent to an easily stated question about when matrices can be broken down into simpler chunks. Matrices are the core objects in linear algebra, which is used to study mathematical phenomena whose behavior can be captured by lines, planes and higher-dimensional spaces. So suddenly, the Kadison-Singer problem was everywhere. Over the decades that followed, it emerged as the key problem in one field after another.

Because there tended to be scant interaction between these disparate fields, no one realized just how ubiquitous the Kadison-Singer problem had become until Casazza found that it was equivalent to the most important problem in his own area of signal processing. The problem concerned whether the processing of a signal can be broken down into smaller, simpler parts. Casazza dived into the Kadison-Singer problem, and in 2005, he, Tremain and two co-authors wrote a paper demonstrating that it was equivalent to the biggest unsolved problems in a dozen areas of math and engineering. A solution to any one of these problems, the authors showed, would solve them all.

One of the many equivalent formulations they wrote about had been devised just a few years earlier by Nik Weaver, of Washington University in St. Louis. Weaver’s version distilled the problem down to a natural-sounding question about when it is possible to divide a collection of vectors into two groups that each point in roughly the same set of directions as the original collection. “It’s a beautiful problem that brought out the core combinatorial problem” at the heart of the Kadison-Singer question, Weaver said.
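Weaver's question is concrete enough to experiment with. In the sketch below (my own illustration with made-up parameters, not Weaver's formulation verbatim), a collection of random vectors is rescaled into "isotropic position," so that collectively they point equally in every direction; a random split then tends to leave each half still covering all directions reasonably evenly, which is the kind of balanced division the conjecture concerns.

```python
import numpy as np

rng = np.random.default_rng(1)

# 60 random vectors in R^3, rescaled into "isotropic position":
# the sum of their outer products becomes the identity matrix,
# meaning the collection as a whole points equally in every direction.
d, m = 3, 60
V = rng.normal(size=(m, d))
evals, evecs = np.linalg.eigh(V.T @ V)
V = V @ evecs @ np.diag(evals ** -0.5) @ evecs.T
assert np.allclose(V.T @ V, np.eye(d))

# Split the vectors into two groups at random and examine one group's
# outer-product sum: eigenvalues strictly between 0 and 1 (ideally
# near 1/2) mean that group alone still covers all directions.
half = V[rng.random(m) < 0.5]
ev = np.linalg.eigvalsh(half.T @ half)
print(ev)  # three values strictly between 0 and 1
```

Weaver's actual conjecture (now the Marcus-Spielman-Srivastava theorem) is stronger than this random experiment: it guarantees that a good split *exists* whenever every individual vector is short enough, with no luck required.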

So Weaver was surprised when — apart from the mention in Casazza’s survey and one other paper that expressed skepticism about his approach — his formulation seemed to meet with radio silence. He thought no one had noticed his paper, but in fact it had attracted the attention of just the right people to solve it.

**Electrical Properties**

When Spielman learned about Weaver’s conjecture in 2008, he knew it was his kind of problem. There’s a natural way to switch between networks and collections of vectors, and Spielman had spent the preceding several years building up a powerful new approach to networks by viewing them as physical objects. If a network is thought of as an electrical circuit, for example, then the amount of current that runs through a given edge (instead of finding alternate routes) provides a natural way to measure that edge’s importance in the network.
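That physical viewpoint is easy to demonstrate. The sketch below (a minimal illustration in Python with NumPy, not code from Spielman's work) measures an edge's importance by its effective resistance, computed from the pseudoinverse of the graph Laplacian: a bridge edge joining two otherwise separate clusters carries all the current between them and scores the maximum value of 1, while an edge inside a well-connected cluster, whose current has alternate routes, scores lower.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def effective_resistance(n, edges, u, v):
    """Resistance between u and v, treating each edge as a 1-ohm resistor."""
    Lplus = np.linalg.pinv(laplacian(n, edges))  # Moore-Penrose pseudoinverse
    x = np.zeros(n)
    x[u], x[v] = 1.0, -1.0
    return x @ Lplus @ x

# Two triangular clusters, {0,1,2} and {3,4,5}, joined by one bridge (2,3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

# The bridge carries all current between the clusters: resistance 1, maximal.
print(effective_resistance(6, edges, 2, 3))  # ~ 1.0
# A triangle edge shares current with a two-hop detour: 1 || 2 = 2/3.
print(effective_resistance(6, edges, 0, 1))  # ~ 0.667
```

This same graph illustrates the obstacle described below: color the bridge red, and the blue network falls apart.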

Spielman discovered Weaver’s conjecture after Kalai introduced him to another form of the Kadison-Singer problem, and he realized that it was nearly identical to a simple question about networks: When is it possible to divide up the edges of a network into two categories — say, red edges and blue edges — so that the resulting red and blue networks have similar electrical properties to the whole network?

It’s not always possible to do this. For instance, if the original network consists of two highly connected clusters that are linked to each other by a single edge, then that edge has an outsize importance in the network. So if that critical edge is colored red, then the blue network can’t have similar electrical properties to the whole network. In fact, the blue network won’t even be connected.

Weaver’s problem asks whether this is the only type of obstacle to breaking down networks into similar but smaller ones. In other words, if there are enough ways to get around in a network — if no individual edge is too important — can the network be broken down into two subnetworks with similar electrical properties?

Spielman, Marcus and Srivastava suspected that the answer was yes, and their intuition did not just stem from their previous work on network sparsification. They also ran millions of simulations without finding any counterexamples. “A lot of our stuff was led by experimentation,” Marcus said. “Twenty years ago, the three of us sitting in the same room would not have solved this problem.”

The simulations convinced them that they were on the right track, even as the problem raised one stumbling block after another. And they kept making spurts of progress, enough to keep them hooked. When Marcus’ postdoctoral fellowship expired at the end of the team’s fourth year working on the problem, he elected to leave academia temporarily and join a local startup called Crisply rather than leave New Haven. “I worked for my company four days a week, and then once a week or so I would go to Yale,” he said.

A network’s electrical properties are governed by a special equation called the network’s “characteristic polynomial.” As the trio performed computer experiments on these polynomials, they found that the equations seemed to have hidden structure: Their solutions were always real numbers (as opposed to complex numbers), and, surprisingly, adding these polynomials together always seemed to result in a new polynomial with that same property. “These polynomials were doing more than we gave them credit for,” Marcus said. “We used them as a way of transferring knowledge, but really the polynomials seemed to be containing knowledge themselves.”
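That experimental observation can be replayed on a toy example. The following sketch (illustrative Python/NumPy on a small graph of my choosing, not the trio's actual machinery) averages the characteristic polynomials of the red subnetwork's scaled Laplacian over every possible red/blue split of a five-edge graph, then checks that the averaged polynomial still has only real roots.

```python
import numpy as np
from itertools import product

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# A small test graph: a 4-cycle with one diagonal (5 edges).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Average the characteristic polynomials of 2 * L_red over all
# 2^5 = 32 ways of coloring the edges red or blue.
avg = np.zeros(n + 1)
for colors in product([False, True], repeat=len(edges)):
    red = [e for e, c in zip(edges, colors) if c]
    avg += np.poly(2 * laplacian(n, red))  # characteristic polynomial coeffs
avg /= 2 ** len(edges)

# Despite averaging 32 different polynomials, every root of the result
# is (numerically) real -- the hidden structure the trio noticed.
roots = np.roots(avg)
print(np.max(np.abs(roots.imag)))
```

Sums of real-rooted polynomials are not real-rooted in general, which is why this behavior surprised the researchers; it is exactly the property their theory of interlacing families was built to explain.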

Piece by piece, the researchers developed a new technique for working with so-called “interlacing polynomials” to capture this underlying structure, and finally, on June 17, 2013, Marcus sent an email to Weaver, who had been his undergraduate advisor at Washington University 10 years earlier. “I hope you remember me,” Marcus wrote. “The reason I am writing is because we … think we have solved your conjecture (the one that you showed was equivalent to Kadison-Singer).” Within days, news of the team’s achievement had spread across the blogosphere.

The proof, which has since been thoroughly vetted, is highly original, Naor said. “What I love about it is just this feeling of freshness,” he said. “That’s why we want to solve open problems — for the rare events when somebody comes up with a solution that’s so different from what was before that it just completely changes our perspective.”

Computer scientists have already applied this new point of view to the “asymmetric” traveling salesman problem. In the traveling salesman problem, a salesman must travel through a series of cities, with the goal of minimizing the total distance traveled; the asymmetric version includes situations in which the distance from A to B differs from the distance from B to A (for instance, if the route includes one-way streets).

The best-known algorithm for finding approximate solutions to the asymmetric problem dates back to 1970, but no one knew how good its approximations were. Now, using ideas from the proof of the Kadison-Singer problem, Nima Anari, of the University of California, Berkeley, and Shayan Oveis Gharan, of the University of Washington in Seattle, have shown that this algorithm performs exponentially better than people had realized. The new result is “major, major progress,” Naor said.

The proof of the Kadison-Singer problem implies that all the constructions in its dozen incarnations can, in principle, be carried out — quantum knowledge can be extended to full quantum systems, networks can be decomposed into electrically similar ones, matrices can be broken into simpler chunks. The proof won’t change what quantum physicists do, but it could have applications in signal processing, since it implies that collections of vectors used to digitize signals can be broken down into smaller frames that can be processed faster. The theorem “has potential to affect some important engineering problems,” Casazza said.

But there’s a big gulf between principle and practice. The proof establishes that these various constructions exist, but it doesn’t say how to carry them out. At present, Casazza says, “there isn’t a chance in hell” of pulling a useful algorithm out of the proof. However, now that mathematicians know that the question has a positive answer, he hopes that a constructive proof will be forthcoming — not to mention a proof that mathematicians in his field can actually understand. “All of us were completely convinced it had a negative answer, so none of us was actually trying to prove it,” he said.

Mathematicians in the fields in which the Kadison-Singer problem has been prominent may feel wistful that three outsiders came in and solved “their” central problem, but that’s not what really happened, Marcus said. “The only reason we could even try to solve such a problem is because people in that field had already removed all the hardness that was happening” in C*-algebras, he said. “There was just one piece left, and that piece wasn’t a problem they had the techniques to solve. I think the reason why this problem lasted 50 years is because it really had two parts that were hard.”

Throughout the five years he spent working on the Kadison-Singer problem, Marcus said, “I don’t think I could have told you what the problem was in the C*-algebra language, because I had no clue.” The fact that he, Srivastava and Spielman were able to solve it “says something about what I hope will be the future of mathematics,” he said. When mathematicians import ideas across fields, “that’s when I think these really interesting jumps in knowledge happen.”

*This article was reprinted on Wired.com. A version of this article also appeared on WNYC’s Hypothesis.*

I'd love to see links to the papers referenced in the article, perhaps arXiv preprints.

Fascinating and well done. Your explanation was clear to a retired high school mathematics teacher.

Much much love for this….

Highly fascinating…..makes one want to go back to grad school…

The Circuit Builders illustration at the top of the article! Whence?

The Kadison-Singer paper is here, and you can find more at the arXiv. http://arxiv.org/abs/1306.3969

I find the title misleading. Adam Marcus is a PhD in my mathematics department. (His PhD is in a joint degree program, but every student has to name a home department, which in his case is mathematics.) Adam has been supported by the Mathematics division of the NSF. Dan Spielman is a professor of applied mathematics and computer science. All three were awarded a well-known mathematics prize, the Pólya Prize.

This may be little more than an exercise in word association on my part, but when I think of breaking things up into "smaller, simpler chunks", two things come readily to mind: (1) factorization, and (2) the classification of the finite simple groups.

Is it possible that the ideas used to solve the Kadison-Singer problem may yield insight into the legendarily difficult challenge of factoring ( * RSA, where is thy sting? * ), as well as helping to explain * why * the only finite simple groups seen * are * the ones actually stipulated in the classification of the finite simple groups?

A mathematical reflex (I hesitate to call it something as grand as an * idea * .): is the "asymmetricness" in the asymmetric traveling salesman problem — in which the distance from A to B may * not * equal the distance from B to A — in any way connected with the notion of noncommutative geometry?

* One wonders many things…. *

Michael Lacey — Indeed, theoretical computer science is essentially a branch of mathematics. What we were trying to convey in the headline and the body of the text is that Marcus, Spielman and Srivastava are outsiders from the point of view of the fields of mathematics most closely connected with the Kadison-Singer problem, such as C*-algebras.

Justin Greenough — there are links sprinkled through the text; perhaps they're a bit hard to spot.

"Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem."

I'm pretty sure that three computer science PhDs at elite universities have more than just a "nodding acquaintance" with Mathematics. It would be more accurate to claim that an "outsider" had solved the problem if it was cracked by a Literature or History professor.

@Gzoref

> I'm pretty sure that three computer science PhDs at elite universities have more than just a "nodding acquaintance" with Mathematics.

Alas, I fear it is very difficult to convey to non-mathematicians just how vast the subject is. In fact, let alone computer scientists, even mathematicians specializing in some other field of functional analysis (the broad subject that can be said to 'contain' C*-algebras) can be considered outsiders. This is why what this article documents is remarkable.

There is no way on Tao's verdant vector* that Kadison-Singer can be entirely unrelated to Tracy-Widom: * there's gotta be a link…. *

* Formerly known as God's green earth.

I wonder if the areas that the proof would extend to would include neuroscience? Being able to either simplify neural networks or extend simpler neural interlinks to larger network structures would appear to be of vast value.

It is not quite clear to me how this differs from G. Kron's method of cutting matrices into smaller matrices, published in 1950 or thereabouts. I read an article in IEEE more than 40 years ago, as I recall. Any good links?

<i>C*-algebras are an esoteric subject — “the most abstract nonsense that exists in mathematics,” in Casazza’s words. “Nobody outside the area knows much about it.”</i>

They are? I thought the "abstract nonsense" moniker was used for Category Theory (an area of some use in Computer Science, arrows are everywhere there; unfortunately I had never heard of it until rather recently). The description of C*-algebras given by https://en.wikipedia.org/wiki/C*-algebra seems rather straightforward; they do not seem much more abstract than, say, vector spaces? Could be introduced in high school, really.

@Stilling

You seem to be talking about Kron Reduction: http://arxiv.org/abs/1102.2950 (or paywalled at IEEE Xplore) – I have never heard of it but it seems to be about replacing a network by a simpler equivalent one from the point of view of a given node.

Here, the problem is more complex:

"Spielman discovered Weaver’s conjecture after Kalai introduced him to another form of the Kadison-Singer problem, and he realized that it was nearly identical to a simple question about networks: When is it possible to divide up the edges of a network into two categories — say, red edges and blue edges — so that the resulting red and blue networks have similar electrical properties to the whole network?"

It comes down to what it means to have "similar electrical properties", I suppose it means that NO node in the RED and BLUE networks can distinguish whether it is in the original network or else in one of the RED and BLUE networks (which I suppose are not unique). Computationally, this sounds much harder to do than Kron Reduction. (Idle thought: this sounds relevant to cryptography somehow, but I can't put my finger on it)

@Yatima

I agree, C*-algebras are rather straightforward. They're just vector spaces with some additional structure; easy examples are the complex numbers and matrices over the complex numbers.