At the Large Hadron Collider in Geneva, physicists shoot protons around a 17-mile track and smash them together at nearly the speed of light. It’s one of the most finely tuned scientific experiments in the world, but when trying to make sense of the quantum debris, physicists begin with a strikingly simple tool called a Feynman diagram that’s not that different from how a child would depict the situation.

Feynman diagrams were devised by Richard Feynman in the 1940s. They feature lines representing elementary particles that converge at a vertex (which represents a collision) and then diverge from there to represent the pieces that emerge from the crash. Those lines either shoot off alone or converge again. The chain of collisions can be as long as a physicist dares to consider.

To that schematic physicists then add numbers, for the mass, momentum and direction of the particles involved. Then they begin a laborious accounting procedure — integrate these, add that, square this. The final result is a single number, called a Feynman probability, which quantifies the chance that the particle collision will play out as sketched.

“In some sense Feynman invented this diagram to encode complicated math as a bookkeeping device,” said Sergei Gukov, a theoretical physicist and mathematician at the California Institute of Technology.

Feynman diagrams have served physics well over the years, but they have limitations. One is strictly procedural. Physicists are pursuing increasingly high-energy particle collisions that require greater precision of measurement — and as the precision goes up, so does the intricacy of the Feynman diagrams that need to be calculated to generate a prediction.

The second limitation is of a more fundamental nature. Feynman diagrams are based on the assumption that the more potential collisions and sub-collisions physicists account for, the more accurate their numerical predictions will be. This process of calculation, known as perturbative expansion, works very well for particle collisions of electrons, where the weak and electromagnetic forces dominate. It works less well for high-energy collisions, like collisions between protons, where the strong nuclear force prevails. In these cases, accounting for a wider range of collisions — by drawing ever more elaborate Feynman diagrams — can actually lead physicists astray.

“We know for a fact that at some point it begins to diverge” from real-world physics, said Francis Brown, a mathematician at the University of Oxford. “What’s not known is how to estimate at what point one should stop calculating diagrams.”
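Brown's point about divergence can be sketched with a toy model. Perturbative series in quantum field theory are typically asymptotic: for a small coupling g, a series whose nth term grows like n!·gⁿ first settles toward an answer, then blows up. The sketch below is an illustrative toy series, not an actual Feynman-diagram computation; it shows that the partial sums improve only up to roughly 1/g terms:

```python
import math

def partial_sums(g, n_terms):
    """Partial sums of the toy asymptotic series sum_n (-1)^n * n! * g^n."""
    total, sums = 0.0, []
    for n in range(n_terms):
        total += (-1) ** n * math.factorial(n) * g ** n
        sums.append(total)
    return sums

sums = partial_sums(0.1, 25)
# Size of each successive correction. The corrections shrink at first,
# then the factorial growth wins and they explode without bound.
increments = [abs(sums[i + 1] - sums[i]) for i in range(len(sums) - 1)]
smallest = min(range(len(increments)), key=lambda i: increments[i])
print(f"smallest correction at term {smallest + 1}")
print(f"last correction: {increments[-1]:.3e}")
```

For g = 0.1 the corrections shrink until around the ninth or tenth term and grow without bound afterward, which is the sense in which computing ever more diagrams can eventually lead physicists astray.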

Yet there is reason for optimism. Over the last decade physicists and mathematicians have been exploring a surprising correspondence that has the potential to breathe new life into the venerable Feynman diagram and generate far-reaching insights in both fields. It has to do with the strange fact that the values calculated from Feynman diagrams seem to exactly match some of the most important numbers that crop up in a branch of mathematics known as algebraic geometry. These values are called “periods of motives,” and there’s no obvious reason why the same numbers should appear in both settings. Indeed, it’s as strange as it would be if every time you measured a cup of rice, you observed that the number of grains was prime.

“There is a connection from nature to algebraic geometry and periods, and with hindsight, it’s not a coincidence,” said Dirk Kreimer, a physicist at Humboldt University in Berlin.

Now mathematicians and physicists are working together to unravel the coincidence. For mathematicians, physics has called to their attention a special class of numbers that they’d like to understand: Is there a hidden structure to these periods that occur in physics? What special properties might this class of numbers have? For physicists, the reward of that kind of mathematical understanding would be a new degree of foresight when it comes to anticipating how events will play out in the messy quantum world.

**A Recurring Theme**

Today periods are one of the most abstract subjects of mathematics, but they started out as a more concrete concern. In the early 17th century scientists such as Galileo Galilei were interested in figuring out how to calculate the length of time a pendulum takes to complete a swing. They realized that the calculation boiled down to taking the integral — a kind of infinite sum — of a function that combined information about the pendulum’s length and angle of release. Around the same time, Johannes Kepler used similar calculations to establish the time that a planet takes to travel around the sun. They called these measurements “periods,” and established them as one of the most important measurements that can be made about motion.
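The pendulum period Galileo studied is a concrete instance of such an integral. For a pendulum of length L released from angle θ₀, the period is T = 4·√(L/g)·∫₀^{π/2} dφ/√(1 − k² sin²φ) with k = sin(θ₀/2), a complete elliptic integral. Here is a minimal numerical sketch (the function name and the midpoint-rule integration are my choices, not anything from the article):

```python
import math

def pendulum_period(length_m, theta0_rad, g=9.81, steps=100_000):
    """Exact pendulum period via the elliptic 'period' integral
    T = 4*sqrt(L/g) * integral_0^{pi/2} dphi / sqrt(1 - k^2 sin^2 phi),
    with k = sin(theta0/2). Midpoint-rule numerical integration."""
    k2 = math.sin(theta0_rad / 2) ** 2
    h = (math.pi / 2) / steps
    integral = sum(
        h / math.sqrt(1 - k2 * math.sin((i + 0.5) * h) ** 2)
        for i in range(steps)
    )
    return 4 * math.sqrt(length_m / g) * integral

# For tiny release angles the integral reduces to pi/2, recovering the
# familiar small-angle formula T ~ 2*pi*sqrt(L/g); for wide swings the
# period is measurably longer.
T_small = pendulum_period(1.0, 0.01)
T_wide = pendulum_period(1.0, math.radians(60))
print(T_small, T_wide)
```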

Over the course of the 18th and 19th centuries, mathematicians became interested in studying periods generally — not just as they related to pendulums or planets, but as a class of numbers generated by integrating polynomial functions like *x*² + 2*x* – 6 and 3*x*³ – 4*x*² – 2*x* + 6. For more than a century, luminaries like Carl Friedrich Gauss and Leonhard Euler explored the universe of periods and found that it contained many features that pointed to some underlying order. In a sense, the field of algebraic geometry — which studies the geometric forms of polynomial equations — developed in the 20th century as a means for pursuing that hidden structure.

This effort advanced rapidly in the 1960s. By that time mathematicians had done what they often do: They translated relatively concrete objects like equations into more abstract ones, which they hoped would allow them to identify relationships that were not initially apparent.

This process first involved looking at the geometric objects (known as algebraic varieties) defined by the solutions to classes of polynomial functions, rather than looking at the functions themselves. Next, mathematicians tried to understand the basic properties of those geometric objects. To do that they developed what are known as cohomology theories — ways of identifying structural aspects of the geometric objects that were the same regardless of the particular polynomial equation used to generate the objects.

By the 1960s, cohomology theories had proliferated to the point of distraction — singular cohomology, de Rham cohomology, étale cohomology and so on. Everyone, it seemed, had a different view of the most important features of algebraic varieties.

It was in this cluttered landscape that the pioneering mathematician Alexander Grothendieck, who died in 2014, realized that all cohomology theories were different versions of the same thing.

“What Grothendieck observed is that, in the case of an algebraic variety, no matter how you compute these different cohomology theories, you always somehow find the same answer,” Brown said.

That same answer — the unique thing at the center of all these cohomology theories — was what Grothendieck called a “motive.” “In music it means a recurring theme. For Grothendieck a motive was something which is coming again and again in different forms, but it’s really the same,” said Pierre Cartier, a mathematician at the Institute of Advanced Scientific Studies outside Paris and a former colleague of Grothendieck’s.

Motives are in a sense the fundamental building blocks of polynomial equations, in the same way that prime factors are the elemental pieces of larger numbers. Motives also have their own data associated with them. Just as you can break matter into elements and specify characteristics of each element — its atomic number and atomic weight and so forth — mathematicians ascribe essential measurements to a motive. The most important of these measurements are the motive’s periods. And if the period of a motive arising in one system of polynomial equations is the same as the period of a motive arising in a different system, you know the motives are the same.

“Once you know the periods, which are specific numbers, that’s almost the same as knowing the motive itself,” said Minhyong Kim, a mathematician at Oxford.

One direct way to see how the same period can show up in unexpected contexts is with pi, “the most famous example of getting a period,” Cartier said. Pi shows up in many guises in geometry: in the integral of the function that defines the one-dimensional circle, in the integral of the function that defines the two-dimensional circle, and in the integral of the function that defines the sphere. That this same value would recur in such seemingly different-looking integrals was likely mysterious to ancient thinkers. “The modern explanation is that the sphere and the solid circle have the same motive and therefore have to have essentially the same period,” Brown wrote in an email.
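This recurrence is easy to check numerically: integrals attached to the disk and to the circle both produce pi. A small sketch with a crude midpoint rule (the helper function and step counts are my own choices):

```python
import math

def midpoint_integral(f, a, b, steps=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Area of the unit disk: 2 * integral of sqrt(1 - x^2) over [-1, 1] = pi.
disk = 2 * midpoint_integral(lambda x: math.sqrt(1 - x * x), -1, 1)

# Arc length of the upper half of the unit circle:
# integral of 1/sqrt(1 - x^2) over [-1, 1] = pi. (This one is an
# improper integral, so the midpoint rule converges slowly at the ends.)
arc = midpoint_integral(lambda x: 1 / math.sqrt(1 - x * x), -1, 1)

print(disk, arc)
```

Two different-looking integrals, one number; in the language of the article, the underlying motive is the same.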

**Feynman’s Arduous Path**

If curious minds long ago wanted to know why values like pi crop up in calculations on the circle and the sphere, today mathematicians and physicists would like to know why those values arise out of a different kind of geometric object: Feynman diagrams.

Feynman diagrams have a basic geometric aspect to them, formed as they are from line segments, rays and vertices. To see how they’re constructed, and why they’re useful in physics, imagine a simple experimental setup in which an electron and a positron collide to produce a muon and an antimuon. To calculate the probability of that result taking place, a physicist would need to know the mass and momentum of each of the incoming particles and also something about the path the particles followed. In quantum mechanics, the path a particle takes can be thought of as the average of all the possible paths it might take. Computing that path becomes a matter of taking an integral, known as a Feynman path integral, over the set of all paths.

Every route a particle collision could follow from beginning to end can be represented by a Feynman diagram, and each diagram has its own associated integral. (The diagram and its integral are one and the same.) To calculate the probability of a specific outcome from a specific set of starting conditions, you consider all possible diagrams that could describe what happens, take each integral, and add those integrals together. That sum is the amplitude of the process. Because amplitudes are complex numbers, physicists then square the magnitude of this number to get the probability.
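In schematic form, the bookkeeping looks like this (the amplitudes below are made-up toy numbers, not values for any real process). It also shows why the squared magnitude of the sum differs from the sum of squared magnitudes, which is why diagrams can interfere:

```python
# Toy illustration with made-up complex amplitudes: each diagram
# contributes one amplitude, and the probability of the outcome is the
# squared magnitude of their SUM.
amplitudes = [0.6 + 0.2j, -0.1 + 0.3j, 0.05 - 0.1j]

total = sum(amplitudes)
probability = abs(total) ** 2  # |z|^2 = z times its complex conjugate

# Squaring each magnitude first and then adding would miss the
# interference between diagrams:
no_interference = sum(abs(a) ** 2 for a in amplitudes)
print(probability, no_interference)
```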

This procedure is easy to execute for an electron and a positron going in and a muon and an antimuon coming out. But that’s boring physics. The experiments that physicists really care about involve Feynman diagrams with loops. Loops represent situations in which particles emit and then reabsorb additional particles. When an electron collides with a positron, there’s an infinite number of intermediate collisions that can take place before the final muon and antimuon pair emerges. In these intermediate collisions, new particles like photons are created and annihilated before they can be observed. The entering and exiting particles are the same as previously described, but the fact that those unobservable collisions happen can still have subtle effects on the outcome.

“It’s like Tinkertoys. Once you draw a diagram you can connect more lines according to the rules of the theory,” said Flip Tanedo, a physicist at the University of California, Riverside. “You can connect more sticks, more nodes, to make it more complicated.”

By considering loops, physicists increase the precision of their experiments. (Adding a loop is like calculating a value to a greater number of significant digits). But each time they add a loop, the number of Feynman diagrams that need to be considered — and the difficulty of the corresponding integrals — goes up dramatically. For example, a one-loop version of a simple system might require just one diagram. A two-loop version of the same system needs seven diagrams. Three loops demand 72 diagrams. Increase it to five loops, and the calculation requires around 12,000 integrals — a computational load that can literally take years to resolve.

Rather than chugging through so many tedious integrals, physicists would love to gain a sense of the final amplitude just by looking at the structure of a given Feynman diagram — just as mathematicians can associate periods with motives.

“This procedure is so complex and the integrals are so hard, so what we’d like to do is gain insight about the final answer, the final integral or period, just by staring at the graph,” Brown said.

**A Surprising Connection**

Periods and amplitudes were presented together for the first time in 1994 by Kreimer and David Broadhurst, a physicist at the Open University in England, with a paper following in 1995. The work led mathematicians to speculate that all amplitudes were periods of mixed Tate motives — a special kind of motive named after John Tate, emeritus professor at Harvard University, in which all the periods are multiple zeta values, generalizations of the values of the Riemann zeta function, one of the most influential constructions in number theory. In the situation with an electron-positron pair going in and a muon-antimuon pair coming out, the main part of the amplitude comes out as six times the Riemann zeta function evaluated at three.
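For a sense of scale: ζ(3) = Σ 1/n³ ≈ 1.2020569 (Apéry's constant), so the value quoted here, six times ζ(3), is about 7.212. A direct check by partial summation (the cutoff of 200,000 terms is an arbitrary choice of mine):

```python
# zeta(3) = sum over n >= 1 of 1/n^3, Apery's constant: the period that
# dominates the electron-positron -> muon-antimuon amplitude.
zeta3 = sum(1 / n**3 for n in range(1, 200_001))
print(6 * zeta3)  # roughly 7.2123
```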

If all amplitudes were multiple zeta values, it would give physicists a well-defined class of numbers to work with. But in 2012 Brown and his collaborator Oliver Schnetz proved that’s not the case. While all the amplitudes physicists come across today may be periods of mixed Tate motives, “there are monsters lurking out there that throw a spanner into the works,” Brown said. Those monsters are “certainly periods, but they’re not the nice and simple periods people had hoped for.”

What physicists and mathematicians do know is that there seems to be a connection between the number of loops in a Feynman diagram and a notion in mathematics called “weight.” Weight is a number related to the dimension of the space being integrated over: A period integral over a one-dimensional space can have a weight of 0, 1 or 2; a period integral over a two-dimensional space can have weight up to 4, and so on. Weight can also be used to sort periods into different types: All periods of weight 0 are conjectured to be algebraic numbers, which can be the solutions to polynomial equations (this has not been proved); the period of a pendulum always has a weight of 1; pi is a period of weight 2; and the weights of values of the Riemann zeta function are always twice the input (so the zeta function evaluated at 3 has a weight of 6).

This classification of periods by weights carries over to Feynman diagrams, where the number of loops in a diagram is somehow related to the weight of its amplitude. Diagrams with no loops have amplitudes of weight 0; the amplitudes of diagrams with one loop are all periods of mixed Tate motives and have, at most, a weight of 4. For graphs with additional loops, mathematicians suspect the relationship continues, even if they can’t see it yet.

“We go to higher loops and we see periods of a more general type,” Kreimer said. “There mathematicians get really interested because they don’t understand much about motives that are not mixed Tate motives.”

Mathematicians and physicists are currently going back and forth trying to establish the scope of the problem and craft solutions. Mathematicians suggest functions (and their integrals) to physicists that can be used to describe Feynman diagrams. Physicists produce configurations of particle collisions that outstrip the functions mathematicians have to offer. “It’s quite amazing to see how fast they’ve assimilated quite technical mathematical ideas,” Brown said. “We’ve run out of classical numbers and functions to give to physicists.”

**Nature’s Groups**

Since the development of calculus in the 17th century, numbers arising in the physical world have informed mathematical progress. Such is the case today. The fact that the periods that come from physics are “somehow God-given and come from physical theories means they have a lot of structure and it’s structure a mathematician wouldn’t necessarily think of or try to invent,” said Brown.

Adds Kreimer, “It seems so that the periods which nature wants are a smaller set than the periods mathematics can define, but we cannot define very cleanly what this subset really is.”

Brown is looking to prove that there’s a kind of mathematical group — a Galois group — acting on the set of periods that come from Feynman diagrams. “The answer seems to be yes in every single case that’s ever been computed,” he said, but proof that the relationship holds categorically is still in the distance. “If it were true that there were a group acting on the numbers coming from physics, that means you’re finding a huge class of symmetries,” Brown said. “If that’s true, then the next step is to ask why there’s this big symmetry group and what possible physics meaning could it have.”

Among other things, it would deepen the already provocative relationship between fundamental geometric constructions from two very different contexts: motives, the objects that mathematicians devised 50 years ago to understand the solutions to polynomial equations, and Feynman diagrams, the schematic representation of how particle collisions play out. Every Feynman diagram has a motive attached to it, but what exactly the structure of a motive is saying about the structure of its related diagram remains anyone’s guess.

*This article was reprinted on Wired.com.*

It's lucky I checked, because when I saw "motive" linked to music, I thought somebody had misheard "motif". Then I found both were valid, though I doubt I shall give up on "motif".

My first thought on this is that these particle collisions do not seem random.

Fractal geometries within subsets…

It will be interesting to watch where this goes in relation to random matrices and heavy atoms. I remember reading somewhere once that the zeros of the Riemann zeta function and the traces of random matrices describing heavy atoms agree very strongly after a million or so. Crazy to think of what the link could be.

A little over 3 years ago, Quanta ran an article about the amplituhedron, which also presented a potential simplification of Feynman Diagrams:

www.quantamagazine.org/20130917-a-jewel-at-the-heart-of-quantum-physics/

Are these two accounts equivalent? Or is this article describing an entirely different approach?

Neil, My understanding is that the amplituhedron is a way of organizing the integrand – it gives a beautiful, geometric interpretation of a differential form, but then you still have to compute the integrals. This work connecting periods and amplitudes is about finding a better way to actually compute the integrals. So, it's fair to say that the amplituhedron and this work are not really talking about the same thing.

I'd also be interested to know if there's any known connection with the Amplituhedron that Quanta previously reported on.

Kevin is right that there is no direct connection: the Amplituhedron is the structure of the integrand in a particular theory, the motivic structure is what happens after integration for most/all theories. That said, Nima Arkani-Hamed is working on something related to this: there are some interesting Amplituhedron-like constructions at one loop that integrate in a particularly clean way.

Amplituhedra and motives? — Birds of a feather.

* Bet the aviary on it. *

Thanks for this great article. Please keep the focus here on hard science, HEP, math, etc. The education articles really were a distraction.

I should've refreshed before posting my comment, but thank you both 🙂

These findings remind me strongly of the novel by Hermann Hesse called "Das Glasperlenspiel" ("The Glass Beadgame"). The Beadgame was played on a sketchily described keyboard instrument that, in the hands of a skilled player, had the capability of translating concepts in one knowledge domain, e.g. music, into another domain, e.g. philosophy, mathematics, or poetry and then displaying the result on a large screen. Underlying these domains were not just analogies ("like"), but actual homologies ("is equivalent to"). The emerging identities described in this article between "periods of motives" and Feynman diagrams seem to be just this sort of relationship.

Can we have the lead image in wallpaper resolution please??

@Yatima,

Hello! Glad you like the lead image. If you click on it, a lightbox holding a 1920×1080 version will expand. We are thinking about having wallpapers in the future, but for now, most of our images are formatted this way — larger sizes available on click.

Out of all contained in this copious and brilliant article, I am interested in the progress made on defining a group for these outcomes. Group Theory has given us outstanding classification methods in Particle Physics and predictive ability of "missing particles" later discovered. Apparently there is a Greater Matrix of outcomes which I hope converges to a number with a limit rather than another singularity (infinite number).

What ever happened with this approach?

"Artist’s rendering of the amplituhedron, a newly discovered mathematical object resembling a multifaceted jewel in higher dimensions. Encoded in its volume are the most basic features of reality that can be calculated — the probabilities of outcomes of particle interactions"

https://www.quantamagazine.org/20130917-a-jewel-at-the-heart-of-quantum-physics/

Thanks @Olena

Fantastic article by Kevin Hartnett. I am so glad you took the time to actually explain ( and very lucidly ) some of the concepts involved here: "algebraic geometry" in particular.

Thank you, Kevin.

Since I'm not a physicist or a math researcher, it's quite difficult to follow the concepts and be sure that I've really understood the article. Although I think I have!!

Please, Clay, don't misunderstand my point. I agree with you about "hard science". But some easiness for laypeople is very welcome.

Regards!

I applaud the effort of the writer to try to tackle the difficult subject of motives. Truth be told, it is easier to teach quantum mechanics to a bright middle-school kid who knows calculus than it is to teach an undergrad who has had a first course in algebraic varieties (at the level of Shafarevich) enough algebraic geometry to understand the search for motives. Unlike in algebraic topology, where the Eilenberg-Steenrod axioms capture the cohomology theories, the situation in algebraic geometry is extremely intricate, with various cohomology theories (like étale l-adic cohomology) having their own structures and connections to other cohomology theories.

One thing though:

>Motives are in a sense the fundamental building blocks of polynomial equations, in the same way that prime factors are the elemental pieces of larger numbers.

This is very misleading.

As much as this science & physics will lead on to the next "thing" – I look at it like putting a human being through a meat grinder, analysing what's left to identify why the subject liked watching Coronation St.

What is the function of time's square?

F = ma, where a = distance/t²

E = mc², where c² = (light-year)²/(year)²

Some of the biggest discoveries in physics involve time's square, but it is usually redundantly defined or based on Earth-centric measurements…

I couldn't help noticing a lot of squares in the algebraic geometry equations. Likewise, periods are "over time."

@Paul: Yes, it may seem strange but still true that humans, ground up or in one piece, do not matter when trying to understand what they like or dislike to watch. The two (anything to watch, and someone or something to watch) exist independently of each other. The unitarity and locality principles as explained here ( https://www.quantamagazine.org/20160428-entanglement-made-simple/ ) might help clear that chasm.

What's the difference between squaring a number and squaring "the magnitude of this number"?

@Paul lol ! I really liked that comparison lol.

I don't agree, but get your point I guess.

This is all fine and good: a correlation. I'll even admit that the correlation could be entirely predictive. But how do you explain the conceptual linkage between the two systems once you find it's a great predictor? We poke a little fun at some of the conceptual leaps the string theorists have made over the years, but it would be unprecedented for physics to just say "We have NO idea how it works, only that it's predictive."

I personally think the answer lies in the other direction (though findings like this catalyze the discussion toward what I'm talking about): increasing our knowledge of the fundamentals of how our systems work, rather than moving in this direction just because getting more literal would at least yield a result we could live with.

"Vector space collapse" hypothesizes that the information contained in an 11-dimentional vector could realistically be mapped at full fidelity to only 5. Even though this is entirely hypothetical right now, at least we understand the operative principles as to how this might work. This is anyone's guess how this actually works.

The same goes for the amplituhedron. OK, let's say it works. How does it work? And if you can answer that, then why mess around with this approach? My point being that if you understand the "why" of something you can always explain the "how", but the reverse is not always so.

My opinion.