Finally, a Fast Algorithm for Shortest Paths on Negative Graphs
Introduction
In algorithms, as in life, negativity can be a drag.
Consider the problem of finding the shortest path between two points on a graph — a network of nodes connected by links, or edges. Often, these edges aren’t interchangeable: A graph could represent a road map on which some roads are slower than others or have higher tolls. Computer scientists account for these differences by pairing each edge with a “weight” that quantifies the cost of moving across that segment — whether that cost represents time, money or something else. Since the 1970s, they’ve known how to find shortest paths essentially as fast as theoretically possible, assuming all weights are positive numbers.
But on some graphs weights can be negative — traveling along one segment can offset the cost of traversing another. Consider, for instance, a delivery driver who must balance the cost of gas and tolls (represented by positive weights) against income from transporting packages (represented by negative weights). In such cases, the fastest known shortest-path algorithm doesn’t work. For decades, fast algorithms for finding shortest paths on negative-weight graphs have remained elusive.
Now a trio of computer scientists has solved this long-standing problem. Their new algorithm, which finds the shortest paths through a graph from a given “source” node to every other node, nearly matches the speed that positive-weight algorithms achieved so long ago.
What’s more, the new approach uses decades-old mathematical techniques, eschewing more sophisticated methods that have dominated modern graph theory research.
“I just couldn’t believe such a simple algorithm exists,” said Maximilian Probst Gutenberg, a computer scientist at the Swiss Federal Institute of Technology Zurich. “All of it has been there for 40 years. It just took someone to be really clever and determined to make it all work.”
The Limits of Greed
The story begins in 1956, when the Dutch computer scientist Edsger Dijkstra developed a fast algorithm to find shortest paths on a graph with only positive weights. To understand it, imagine starting from the source and exploring the graph one node at a time, jotting down the weights of newly discovered edges as you go. Each time you visit a node, make preliminary estimates of the shortest paths from the source to each of the new node’s neighbors, updating any existing estimates if you’ve found a new shorter path. To decide which unexplored node to visit next, use what’s called a greedy strategy: Go to whichever is closest to the source according to your current estimate.
With positive weights, the path that Dijkstra’s algorithm takes to visit each node for the first time is truly the shortest. It’s easiest to see that this is true for the very first step. Imagine two nodes A and B connected by an edge with weight 2. If A is the source node, and every other edge touching it has a larger weight, then the direct path from A to B must be the shortest possible path connecting those two points, since the first segment of any other path would already be longer. Similar reasoning works at each step. The algorithm never has to look back, so it’s guaranteed to finish after running through the graph once — that’s what makes it fast. In the 1970s, researchers improved Dijkstra’s algorithm, making it nearly as fast as theoretically possible.
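To make the greedy strategy concrete, here is a minimal Python sketch of Dijkstra's algorithm; the dictionary-based graph format and the function name are illustrative choices, not anything prescribed by the research discussed here.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with non-negative weights.

    `graph` maps each node to a list of (neighbor, weight) pairs.
    Returns a dict of shortest-path distances from `source`.
    """
    dist = {source: 0}
    finalized = set()
    heap = [(0, source)]  # (current estimate, node)
    while heap:
        d, node = heapq.heappop(heap)
        if node in finalized:
            continue  # stale queue entry; this node was already finalized
        finalized.add(node)  # greedy step: the closest estimate is now final
        for neighbor, weight in graph.get(node, []):
            if neighbor in finalized:
                continue  # with positive weights, finalized estimates can't improve
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d  # found a shorter path; update the estimate
                heapq.heappush(heap, (new_d, neighbor))
    return dist
```

Because every weight is non-negative, a node's estimate can never improve after it is finalized, so each node is processed only once; that single pass is what makes the approach fast.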
But negative weights spell trouble for Dijkstra’s greedy strategy. Consider again our delivery driver. A direct route from A to B that nets a small profit may earn less money than a circuitous path that has a big payoff somewhere. “You can’t make decisions just based on the local information,” said Sanjeev Khanna, a computer scientist at the University of Pennsylvania. “You may have to make several seemingly suboptimal moves to finally get a real reward.”
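A tiny made-up example illustrates the failure. Suppose the direct edge from A to B has weight 2, while a detour runs from A to C with weight 3 and from C to B with weight -2:

```python
# Hypothetical weights: A -> B costs 2, A -> C costs 3, C -> B costs -2.
# Dijkstra finalizes B at distance 2 (its closest-looking estimate) and never
# looks back, so it misses the cheaper detour through C.
direct_route = 2
detour_route = 3 + (-2)
print(direct_route, detour_route)  # prints "2 1": the greedy answer is not shortest
```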
For decades, computer scientists working on negative-weight graphs tried to match the speed of Dijkstra’s algorithm with similar “combinatorial” algorithms. These involve discrete operations — like counting possibilities, modifying weights and selectively deleting edges — that reflect the discrete structure of the underlying graph. Progress slowed by the 1990s, however. More recently, researchers have used “continuous optimization” algorithms, which borrow tricks from calculus. Unfortunately, the resulting speedups have been limited, and have often come at the cost of simplicity.
Break the Cycle
In the summer of 2021, two computer scientists who’d become colleagues at the University of Copenhagen — Danupon Nanongkai and Christian Wulff-Nilsen — were searching for a topic for a joint research project. “Christian said, ‘Oh, by the way, I was on vacation, and because of that I was trying to think of something very ambitious,’” recalled Nanongkai, who’s now at the Max Planck Institute for Informatics in Saarbrücken, Germany. They settled on the negative-weight shortest-paths problem and invited Aaron Bernstein of Rutgers University to join them.
All three researchers were experts in combinatorial graph algorithms for other problems, and they wanted to see how far these relatively ancient approaches could get them. “There’s actually a certain freedom in working on a problem that’s ambitious and has been open for a long time,” Bernstein said.
The trio started by temporarily ignoring a subset of possible graphs: those containing negative cycles. These are paths that loop back to where they started after passing through a series of edges whose weights add up to a negative number. In a graph with negative cycles reachable from the starting point, the notion of a shortest path breaks down, since you can make the distance to any node as negative (or as profitable) as you like, by taking repeated laps around the negative cycle before heading off to your destination.
The researchers suspected that long negative paths were mainly responsible for making the problem difficult. So they began to focus on tight clusters of nearby nodes, which can’t contain any long negative paths: If a long, strongly negative path connected two points in a cluster, the short positive path leading back between them would close a loop with negative total weight, a negative cycle. Within a tight cluster, “the fact that everyone is close together in a positive sense actually gives you useful information about the negative edges as well,” Bernstein said. “It tells you things can’t be too negative.”
Most graphs contain many such tight-knit clusters that are only weakly connected to each other. If the researchers could pinpoint all the clusters, they suspected they could develop a way to find shortest paths quickly within each one. From there, they might have an easier time connecting individual clusters and finding the shortest paths on the original graph. But that would require quickly spotting the regions of any graph in which nodes are close together — something they didn’t know how to do. The key turned out to be a technique that originated in an entirely different branch of graph theory.
Cutting Up Graphs
In the 1980s, computer scientists developed a technique called low-diameter decomposition to pick out tight clusters in a graph and identify the edges to delete to separate those clusters. This technique provides a way to divide graphs into independent sections. It was invented to facilitate “distributed” algorithms, in which computations run in parallel on different parts of a graph, so it was less obviously useful for shortest-paths algorithms, which don’t naturally split into independent pieces that can be processed in parallel.
Bernstein, Nanongkai and Wulff-Nilsen realized that low-diameter decomposition could help them identify clusters without much concentrated negativity. Unfortunately, standard low-diameter decomposition algorithms only work on undirected graphs — those in which every edge can be traversed in both directions. The negative-weight shortest-paths problem, meanwhile, only makes sense on directed graphs, in which every edge is a one-way street. (Otherwise, a single undirected negative edge would create a negative cycle consisting of repeated hops back and forth across that edge.) If the researchers wanted to use low-diameter decomposition, they’d have to adapt it.
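The full technique is subtle, but its central ball-growing idea on an undirected graph can be caricatured in a few lines of Python. In this much-simplified sketch the radius rule is a stand-in: real decompositions draw radii from a carefully tuned random distribution so that, on average, only a few edges cross cluster boundaries.

```python
import random
from collections import deque

def ball_growing_sketch(adj, max_radius):
    """Crude low-diameter-decomposition sketch for an undirected graph.

    `adj` maps each node to a list of neighbors. Repeatedly pick an
    unclustered node and grow a breadth-first ball of random radius
    around it; the edges leaving each ball are the ones to cut.
    """
    unassigned = set(adj)
    clusters = []
    while unassigned:
        center = next(iter(unassigned))
        radius = random.randint(1, max_radius)  # placeholder for a geometric draw
        ball = {center}
        frontier = deque([(center, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == radius:
                continue  # stop growing once the chosen radius is reached
            for neighbor in adj[node]:
                if neighbor in unassigned and neighbor not in ball:
                    ball.add(neighbor)
                    frontier.append((neighbor, depth + 1))
        clusters.append(ball)
        unassigned -= ball
    return clusters
```

Each cluster has small diameter by construction; the art that this sketch leaves out is choosing radii so that few edges need to be cut.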
That’s what they did in their new paper. Inspired by past work in which Bernstein and Wulff-Nilsen had collaborated with Probst Gutenberg, they developed a fracturing procedure for directed graphs analogous to low-diameter decomposition. The procedure chops up an arbitrary directed graph into a series of tight-knit clusters by using a random process to delete just a handful of edges. Afterward, those clusters are connected by a sparser network in which all the edges point in the same direction. That kind of network is called a directed acyclic graph, or DAG.
Think of a DAG like a stream in which water may flow down different paths: Some paths flow in from different sources, others fan out in different directions, and still others may split apart and merge back together. But nothing ever flows backward, so there are no cycles; this makes DAGs much easier to work with.
Researchers have long known how to rapidly find shortest paths on DAGs even with negative weights. So the fracturing technique enabled the three researchers to reduce any directed graph to a combination of two special cases — DAGs and tight clusters — that were each easy to handle.
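Why DAGs are easy: their nodes can be sorted so that every edge points from an earlier node to a later one, and relaxing the edges in that order handles each edge exactly once, negative weights included. A minimal sketch, again assuming the illustrative dictionary graph format used above:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def dag_shortest_paths(graph, source):
    """Shortest paths from `source` on a DAG; negative weights are fine.

    `graph` maps each node to a list of (neighbor, weight) pairs.
    """
    # graphlib expects each node's set of predecessors, so invert the edges.
    preds = {node: set() for node in graph}
    for node, edges in graph.items():
        for neighbor, _ in edges:
            preds.setdefault(neighbor, set()).add(node)
    dist = {source: 0}
    for node in TopologicalSorter(preds).static_order():
        if node not in dist:
            continue  # not reachable from the source
        for neighbor, weight in graph.get(node, []):
            if dist[node] + weight < dist.get(neighbor, float("inf")):
                dist[neighbor] = dist[node] + weight
    return dist
```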
The new shortest-paths algorithm repeatedly uses the fracturing procedure to break a graph into tight-knit clusters connected by a DAG. It then breaks those clusters apart further and further. At the end of the process, the clusters at the innermost level are as closely connected as possible. Part of the reason the algorithm is so fast is that it doesn’t take many iterations to fully break down even a very large graph, just as it doesn’t take long to cut a large number down to a reasonable size if you repeatedly divide it in half.
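A quick bit of arithmetic shows why the halving analogy implies so few iterations:

```python
# Repeated halving shrinks even huge numbers in a handful of steps:
n, rounds = 1_000_000, 0
while n > 1:
    n //= 2
    rounds += 1
print(rounds)  # 19 -- logarithmically few iterations, even for a million
```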
With the graph fully broken down in this manner, the researchers could quickly find shortest paths through every part of the graph. For the tight clusters at the innermost level of the nested graph structure, this was easy — they had practically no negativity left. And the researchers already knew how to find shortest paths on the DAG sections joining them.
Finally, the algorithm adds back the edges eliminated by the fracturing process and calculates their effects on the shortest paths. The researchers proved that their process for randomly deleting edges would almost always require just a few deletions to eliminate “backward” edges — the kind that would turn their DAG into a graph with large cycles. That made it extremely unlikely that any shortest path would pass through too many such backward segments, so they could solve this tricky final step by combining two textbook methods from the 1950s: Dijkstra’s algorithm and the first algorithm developed for negative-weight graphs.
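That 1950s method is the textbook Bellman-Ford algorithm, which forgoes greed entirely: it just relaxes every edge over and over, and one extra round at the end reveals any reachable negative cycle. A minimal sketch, with the same assumed graph format as the earlier examples:

```python
def bellman_ford(graph, source):
    """Single-source shortest paths allowing negative edge weights.

    `graph` maps each node to a list of (neighbor, weight) pairs.
    Raises ValueError if a negative cycle is reachable from `source`.
    """
    nodes = set(graph)
    for edges in graph.values():
        nodes.update(neighbor for neighbor, _ in edges)
    dist = {node: float("inf") for node in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):  # n - 1 rounds suffice if no negative cycle
        for node, edges in graph.items():
            for neighbor, weight in edges:
                if dist[node] + weight < dist[neighbor]:
                    dist[neighbor] = dist[node] + weight
    for node, edges in graph.items():  # one extra round: any further improvement
        for neighbor, weight in edges:  # means a reachable negative cycle
            if dist[node] + weight < dist[neighbor]:
                raise ValueError("negative cycle reachable from the source")
    return dist
```

On its own, Bellman-Ford takes roughly the number of nodes times the number of edges in steps, far from near-linear, which is why the new algorithm can afford to invoke this style of relaxation only sparingly, on the handful of backward edges that survive the fracturing.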
“It’s an extremely clever composition of these ideas,” Khanna said. The algorithm is the first for negative-weight graphs that runs in “near-linear” time — which means its runtime is nearly proportional to the time required just to count all the edges, the fastest it could possibly be.
And what of the graphs with negative cycles, which the researchers decided to ignore at the start? After putting the finishing touches on their shortest-paths algorithm, they showed that it could also work as a fast algorithm for pinpointing negative cycles. Virtually no graph was beyond its reach.
Parallel Paths
Bernstein presented the team’s result at the 2022 Foundations of Computer Science conference, where the paper describing the new algorithm was named one of the two best papers. The other winning paper also happened to describe a new near-linear-time algorithm for solving a long-standing problem in graph theory.
That algorithm, developed by Probst Gutenberg and five other researchers, addressed a more general problem called minimum-cost flow, in which the goal is to optimize transport through many paths in parallel, and each edge has a maximum capacity as well as an associated cost. Shortest-paths problems are a special case of minimum-cost flow, so the new minimum-cost-flow algorithm could also be used to solve the negative-weight shortest-paths problem in near-linear time, albeit with a radically different approach.
The team working on minimum-cost flow developed their general-purpose fast algorithm using an intricate synthesis of combinatorial and continuous optimization techniques that makes it unwieldy in practice, at least currently. The combinatorial algorithm by Bernstein and his colleagues, though restricted to a more specific problem, achieves its near-linear runtime without sacrificing simplicity.
“This is what’s so astonishing about this paper,” said Probst Gutenberg. “You can explain it to an undergraduate student, and you can also implement it on your computer.”
As a result, this new algorithm has revived interest in combinatorial approaches to other problems in graph theory. It remains to be seen which problems can be solved rapidly using purely combinatorial algorithms, and which really require the continuous techniques developed in the past 20 years.
“This is a philosophical question that I’m trying to understand,” Nanongkai said. “This shortest-path problem gives some hope.”
Correction: January 20, 2023
The original version of this article implied that Dijkstra’s original shortest-path algorithm, from the 1950s, achieved the theoretical speed limit for positive-weight graphs. In fact, the speed limit was reached during the 1970s, after researchers made improvements to Dijkstra’s algorithm.