
Mathematicians Clear Hurdle in Quest to Decode Primes

Paul Nelson has solved a version of the subconvexity problem, bringing mathematicians one step closer to understanding the Riemann hypothesis and the distribution of prime numbers.


Introduction

It’s been 162 years since Bernhard Riemann posed a seminal question about the distribution of prime numbers. Despite their best efforts, mathematicians have made very little progress on the Riemann hypothesis. But they have managed to make headway on simpler related problems.

In a paper posted in September, Paul Nelson of the Institute for Advanced Study has solved a version of the subconvexity problem, a kind of lighter-weight version of Riemann’s question. The proof is a significant achievement on its own and teases the possibility that even greater discoveries related to prime numbers may be in store.

“It’s a bit of a far-fetched dream, but you could hyper-optimistically hope that maybe we get some insight in how the [Riemann hypothesis] works by working on problems like this,” Nelson said.

The Riemann hypothesis and the subconvexity problem are important because prime numbers are the most fundamental — and most fundamentally mysterious — objects in mathematics. When you plot them on the number line, there appears to be no pattern to how they’re distributed. But in 1859 Riemann devised an object called the Riemann zeta function — a kind of infinite sum — which fueled a revolutionary approach that, if proved to work, would unlock the primes’ hidden structure.

“It proves a result that a few years ago would have been regarded as science fiction,” said Valentin Blomer of the University of Bonn, referring to Nelson’s proof.

Getting Complex

Riemann’s question hinges on the Riemann zeta function. The terms it adds together are the reciprocals of the whole numbers, in which the denominators are raised to a power defined by a variable, s (so $\frac{1}{1^{s}}$, $\frac{1}{2^{s}}$, $\frac{1}{3^{s}}$ and so on).
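Written out for reference, the sum in question is the series

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} = \frac{1}{1^{s}} + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \cdots,$$

which converges when the real part of s is greater than 1 and is extended to the rest of the complex plane by a standard process called analytic continuation.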

Riemann proposed that if mathematicians could prove a basic property of this function — what it takes for it to equal zero — they’d be able to estimate with great accuracy how many prime numbers there are along any given interval on the number line.

Prior to Riemann, Leonhard Euler constructed a similar function and used it to create a new proof that there are infinitely many primes. In Euler’s function, the denominators are raised to powers that are real numbers. The Riemann zeta function, by contrast, assigns complex numbers to the variable s, an innovation that brings the whole vast store of techniques from complex analysis to bear on questions in number theory.

Complex numbers have two parts, one real and one imaginary, the latter of which relates to the imaginary number i, defined as the square root of −1. Examples include 3 + 4i and 2 − 6i. In these cases, the 3 and the 2 are the real parts, while the 4 and −6 are the imaginary parts.

The Riemann hypothesis is about which values of s make the Riemann zeta function equal zero. It predicts that the only important, or nontrivial, values of s that do this are complex numbers whose real part equals $\frac{1}{2}$. (The function also equals zero whenever s is a negative even integer with an imaginary part that equals zero, but those zeros are easy to see and are considered trivial.) If the Riemann hypothesis is true, the Riemann zeta function explains how primes are distributed on the number line. (Exactly how it explains that is complicated. Quanta recently produced a video detailing just how it works.)
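As a quick numerical sketch of what a nontrivial zero looks like, the Python library mpmath can evaluate the zeta function at complex inputs. The first nontrivial zero is known to lie at roughly $\frac{1}{2} + 14.1347i$; the snippet below checks the function there and at a nearby point off the critical line:

```python
# A small numerical sketch: evaluate the Riemann zeta function at
# complex inputs with the mpmath library (pip install mpmath).
from mpmath import mp, mpc, zeta

mp.dps = 20  # compute with 20 decimal digits of precision

# The first nontrivial zero sits at roughly 1/2 + 14.1347i.
on_line = zeta(mpc(0.5, 14.134725141734693))
off_line = zeta(mpc(0.6, 14.134725141734693))  # same height, real part 0.6

print(abs(on_line))   # essentially zero at this precision
print(abs(off_line))  # clearly nonzero
```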

In the years since Riemann proposed it, the Riemann hypothesis has instigated many advances in mathematics, though mathematicians have made little progress on the question itself. Given that relative futility, they’ve at times redirected their attention to slightly easier questions which approximate Riemann’s intractable riddle.

Next to Nothing

The problem Paul Nelson solved is two steps removed from the Riemann hypothesis. Each step takes a bit of explanation.

The first is the Lindelöf hypothesis. Where the Riemann hypothesis says that the only nontrivial zeros of the Riemann zeta function occur when the real part of s equals $\frac{1}{2}$, the Lindelöf hypothesis merely says that under that condition, the output of the Riemann zeta function is small in a certain precise sense.

For both the Riemann and Lindelöf hypotheses, the real part of s is fixed at $\frac{1}{2}$, but the imaginary part can be any number you like: 2, 537, $\frac{1}{2}$. One way to define “small” is to compare the number of digits in the input, s, with the number of digits in the output.


Mathematicians can easily establish that the output never has more than 25% as many digits as the input. This means that it grows as the input grows, but it doesn’t grow disproportionately. This 25% is called the trivial bound. But the Lindelöf hypothesis says that as the inputs get larger, the output stays below 1% as many digits as the input (and, in fact, below any fixed percentage you care to name).

For more than a century, mathematicians have worked on closing the gap between the trivial bound (25%) and the conjectured bound (1%). They have made a dozen or so improvements, the most recent in 2017 when Jean Bourgain proved that for values of s with real part $\frac{1}{2}$, the output of the Riemann zeta function has at most about 15% as many digits as the input. So if the input is a 1,000,000-digit number, the output won’t have more than 150,000 digits. It’s a far cry from proving the Lindelöf hypothesis, let alone Riemann’s question, but it’s something.
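In the notation analytic number theorists use, these digit percentages correspond to exponents on the size of the input. Roughly, the ladder of bounds on the critical line is

$$\left|\zeta\left(\tfrac{1}{2} + it\right)\right| = O\!\left(t^{1/4 + \epsilon}\right) \text{ (trivial)}, \qquad O\!\left(t^{13/84 + \epsilon}\right) \text{ (Bourgain, 2017)}, \qquad O\!\left(t^{\epsilon}\right) \text{ (Lindelöf)},$$

with $\frac{13}{84} \approx 0.155$ supplying the roughly 15% figure. The Lindelöf exponent $\epsilon$ can be any positive number, which is why the 1% above really stands in for “any fixed percentage.”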

“We haven’t made any progress on the Riemann hypothesis in 150 years, whereas this is a question we can make incremental progress towards,” said Nelson. “There’s a way you can kind of keep score.”

The Lindelöf hypothesis is just one example of a Riemann-adjacent problem amenable to scorekeeping. In his new work, Nelson solved another problem that’s one more step removed from Riemann’s question.

Families of Functions

The Riemann zeta function is the most famous member of a large class of mathematical objects, L-functions, that encode many different arithmetic relationships. By modifying the definition of the Riemann zeta function, mathematicians construct other L-functions that provide more refined information about the primes. For example, the properties of some L-functions measure how many primes below a certain value have a given number as their last digit.
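The simplest examples beyond the zeta function itself are the Dirichlet L-functions, which are the objects behind statements like the last-digit counts mentioned above. Each one weights the terms of the zeta series by a character $\chi$, a periodic function that depends only on the remainder of n modulo some fixed number q (such as q = 10 for last digits):

$$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^{s}}.$$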

Due to this versatility, L-functions are objects of intense study, and they are central players in a sprawling research vision known as the Langlands program. For now, mathematicians still lack a full theory explaining just what they are.

“There is some big zoo of these things, and for most of them we can’t prove anything at all,” said Nelson.



Video: Alex Kontorovich, professor of mathematics at Rutgers University, breaks down the notoriously difficult Riemann hypothesis in this comprehensive explainer.


One piece of that theory involves a generalization of the Lindelöf hypothesis, which predicts that whenever the real part of the complex number input equals $\frac{1}{2}$, the output stays small relative to the input for all L-functions (not just the Riemann zeta function).

While mathematicians have chipped away at the Lindelöf hypothesis, they’d only managed scattered progress on something known as the subconvexity problem. Solving that simply amounts to breaking the trivial bound — that is, proving that for any L-function, the output will have less than 25% of the number of digits of the input (multiplied by a quantity called the degree of the L-function). Previously, mathematicians managed to do that for only a few specific families of L-functions (including the Riemann zeta function) and were far from achieving a general result.
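Stated a bit more formally (and suppressing technical hypotheses), both bounds are usually phrased in terms of a quantity $C(\pi)$, the analytic conductor of the L-function, which plays the role of the input size above. Breaking the trivial, or convexity, bound means finding some fixed $\delta > 0$:

$$L\left(\tfrac{1}{2}, \pi\right) = O\!\left(C(\pi)^{1/4 + \epsilon}\right) \text{ (convexity)}, \qquad L\left(\tfrac{1}{2}, \pi\right) = O\!\left(C(\pi)^{1/4 - \delta}\right) \text{ (subconvexity)}.$$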

But that began to change in the 1990s when mathematicians recognized that merely breaking the trivial bound for general L-functions could lead to advances on different problems, including questions in an area of research called arithmetic quantum chaos and a question about which integers can be written as sums of three squares.

“People realized in the last 20 to 30 years that there are all these problems that could be solved, provided one could prove this technical-looking statement” about subconvexity, said Emmanuel Kowalski of the Swiss Federal Institute of Technology Zurich.

Nelson was the mathematician who finally did it, after two decades of work that taught him how to picture the problem.

A Shift in Perspective

In the early 2000s two teams of mathematicians — Joseph Bernstein and Andre Reznikov on one team, and Philippe Michel and Akshay Venkatesh on the other — transformed how mathematicians estimate L-functions. Instead of seeing them merely in arithmetic terms, they created a geometric way to think about the size of their outputs. That work contributed to Venkatesh winning the Fields Medal, math’s highest honor, in 2018.

In this revised picture, the size of an L-function is linked to the size of an integral, called a period, that can be calculated by integrating a function called an automorphic form along a geometric space. This provided mathematicians with more tools they could use to try to break the trivial bound.

“You had more techniques to play with,” said Michel, of the Swiss Federal Institute of Technology Lausanne.

Nelson and Venkatesh collaborated on a 2018 paper that determined which automorphic forms are best for making the kinds of size estimates needed to answer the subconvexity problem. In the following years, Nelson produced two more solo papers on the topic — the first in 2020, the second this past September — that together solved it.

Nelson proved that each L-function satisfies a subconvex bound, meaning its outputs have fewer than 25% as many digits as its inputs. He broke the bound by a hair — getting just below 25% for most L-functions — but sometimes that’s all it takes to cross from one world into the next.

“He broke the trivial bound, and we are amply satisfied with this. It’s really the breaking of things,” said Michel.

Now mathematicians will march their subconvex bound off to face other problems, maybe even including the Riemann hypothesis one day. That may seem far-fetched right now, but math thrives on hope, and at the very least, Nelson’s new proof has provided that.

Correction: January 13, 2022
A previous version of this article stated that the Riemann hypothesis predicts that the only nontrivial zeros of the Riemann zeta function occur whenever the real part of s is $\frac{1}{2}$. It actually predicts nontrivial zeros occur only when the real part equals $\frac{1}{2}$. The article has been updated accordingly.

Correction: January 14, 2022
This article has been updated to emphasize that Nelson solved a version of the subconvexity problem, not the full problem.

Correction: January 18, 2022
A previous version of this article incorrectly included “i” when discussing the imaginary part of a complex number. The article has been updated accordingly.
