
Wylie Beckert for Quanta Magazine

With a surprising new proof, two young mathematicians have found a bridge across the finite-infinite divide, helping at the same time to map this strange boundary.

The boundary does not pass between some huge finite number and the next, infinitely large one. Rather, it separates two kinds of mathematical statements: “finitistic” ones, which can be proved without invoking the concept of infinity, and “infinitistic” ones, which rest on the assumption — not evident in nature — that infinite objects exist.

Mapping and understanding this division is “at the heart of mathematical logic,” said Theodore Slaman, a professor of mathematics at the University of California, Berkeley. This endeavor leads directly to questions of mathematical objectivity, the meaning of infinity and the relationship between mathematics and physical reality.

More concretely, the new proof settles a question that has eluded top experts for two decades: the classification of a statement known as “Ramsey’s theorem for pairs,” or \(RT_2^2\). Whereas almost all theorems can be shown to be equivalent to one of a handful of major systems of logic — sets of starting assumptions that may or may not include infinity, and which span the finite-infinite divide — \(RT_2^2\) falls between these lines. “This is an extremely exceptional case,” said Ulrich Kohlenbach, a professor of mathematics at the Technical University of Darmstadt in Germany. “That’s why it’s so interesting.”

In the new proof, Keita Yokoyama, 34, a mathematician at the Japan Advanced Institute of Science and Technology, and Ludovic Patey, 27, a computer scientist from Paris Diderot University, pin down the logical strength of \(RT_2^2\) — but not at a level most people expected. The theorem is ostensibly a statement about infinite objects. And yet, Yokoyama and Patey found that it is “finitistically reducible”: It’s equivalent in strength to a system of logic that does not invoke infinity. This result means that the infinite apparatus in \(RT_2^2\) can be wielded to prove new facts in finitistic mathematics, forming a surprising bridge between the finite and the infinite. “The result of Patey and Yokoyama is indeed a breakthrough,” said Andreas Weiermann of Ghent University in Belgium, whose own work on \(RT_2^2\) unlocked one step of the new proof.

Courtesy of Ludovic Patey and Keita Yokoyama

Ludovic Patey, left, and Keita Yokoyama co-authored a proof giving the long-sought classification of Ramsey’s theorem for pairs.

Ramsey’s theorem for pairs is thought to be the most complicated statement involving infinity that is known to be finitistically reducible. It invites you to imagine having in hand an infinite set of objects, such as the set of all natural numbers. Each object in the set is paired with all other objects. You then color each pair of objects either red or blue according to some rule. (The rule might be: For any pair of numbers A < B, color the pair blue if B < 2A, and red otherwise.) When this is done, \(RT_2^2\) states that there will exist an infinite monochromatic subset: a set consisting of infinitely many numbers, such that every pair drawn from the set is the same color. (Yokoyama, working with Slaman, is now generalizing the proof so that it holds for any number of colors.)
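
For readers who want to experiment, here is a minimal Python sketch (the function names are illustrative, not from the paper) of the example rule above on a finite initial segment of the natural numbers. No finite computation can exhibit an infinite monochromatic set, but it can show finite fragments of one: under the rule "blue if B < 2A," the powers of two pair up entirely red, while a run like {10, …, 19} pairs up entirely blue, though no blue set can be extended forever.

```python
from itertools import combinations

def color(a, b):
    """Color of the pair {a, b}, assuming a < b, per the article's example rule."""
    return "blue" if b < 2 * a else "red"

def is_monochromatic(numbers):
    """True if every pair drawn from `numbers` gets the same color."""
    pairs = combinations(sorted(numbers), 2)
    return len({color(a, b) for a, b in pairs}) <= 1

# Powers of two: for 2^i < 2^j we have 2^j >= 2 * 2^i, so every pair is red.
print(is_monochromatic([2 ** i for i in range(12)]))   # True

# A run {n, ..., 2n-1}: every pair (a, b) has b < 2n <= 2a, so every pair is
# blue. Blue sets can be arbitrarily large, but none can grow without bound.
print(is_monochromatic(range(10, 20)))                 # True
print(color(10, 20))                                   # red: 20 is not < 2*10
```

For this particular rule the red color is the one that admits an infinite monochromatic set; the theorem guarantees that, whatever rule you pick, at least one color always does.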

The colorable, divisible infinite sets in \(RT_2^2\) are abstractions that have no analogue in the real world. And yet, Yokoyama and Patey’s proof shows that mathematicians are free to use this infinite apparatus to prove statements in finitistic mathematics — including the rules of numbers and arithmetic, which arguably underlie all the math that is required in science — without fear that the resulting theorems rest upon the logically shaky notion of infinity. That’s because all the finitistic consequences of \(RT_2^2\) are “true” with or without infinity; they are guaranteed to be provable in some other, purely finitistic way. \(RT_2^2\)’s infinite structures “may make the proof easier to find,” explained Slaman, “but in the end you didn’t need them. You could give a kind of native proof — a [finitistic] proof.”

When Yokoyama set his sights on \(RT_2^2\) as a postdoctoral researcher four years ago, he expected things to turn out differently. “To be honest, I thought actually it’s not finitistically reducible,” he said.

Lucy Reading-Ikkanda for Quanta Magazine

This was partly because earlier work proved that Ramsey’s theorem for triples, or \(RT_2^3\), is not finitistically reducible: When you color trios of objects in an infinite set either red or blue (according to some rule), the infinite, monochrome subset of triples that \(RT_2^3\) says you’ll end up with is too complex an infinity to reduce to finitistic reasoning. That is, compared to the infinity in \(RT_2^2\), the one in \(RT_2^3\) is, so to speak, more hopelessly infinite.

Even as mathematicians, logicians and philosophers continue to parse the subtle implications of Patey and Yokoyama’s result, it is a triumph for the “partial realization of Hilbert’s program,” an approach to infinity championed by the mathematician Stephen Simpson of Vanderbilt University. The program replaces an earlier, unachievable plan of action by the great mathematician David Hilbert, who in 1921 commanded mathematicians to weave infinity completely into the fold of finitistic mathematics. Hilbert saw finitistic reducibility as the only remedy for the skepticism then surrounding the new mathematics of the infinite. As Simpson described that era, “There were questions about whether mathematics was going into a twilight zone.”

The Rise of Infinity

The philosophy of infinity that Aristotle set out in the fourth century B.C. reigned virtually unchallenged until 150 years ago. Aristotle accepted “potential infinity” — the promise of the number line (for example) to continue forever — as a perfectly reasonable concept in mathematics. But he rejected as meaningless the notion of “actual infinity,” in the sense of a complete set consisting of infinitely many elements.

Aristotle’s distinction suited mathematicians’ needs until the 19th century. Before then, “mathematics was essentially computational,” said Jeremy Avigad, a philosopher and mathematician at Carnegie Mellon University. Euclid, for instance, deduced the rules for constructing triangles and bisectors — useful for bridge building — and, much later, astronomers used the tools of “analysis” to calculate the motions of the planets. Actual infinity — impossible to compute by its very nature — was of little use. But the 19th century saw a shift away from calculation toward conceptual understanding. Mathematicians started to invent (or discover) abstractions — above all, infinite sets, pioneered in the 1870s by the German mathematician Georg Cantor. “People were trying to look for ways to go further,” Avigad said. Cantor’s set theory proved to be a powerful new mathematical system. But such abstract methods were controversial. “People were saying, if you’re giving arguments that don’t tell me how to calculate, that’s not math.”

And, troublingly, the assumption that infinite sets exist led Cantor directly to some nonintuitive discoveries. He found that infinite sets come in an infinite cascade of sizes — a tower of infinities with no connection to physical reality. What’s more, set theory yielded proofs of theorems that were hard to swallow, such as the 1924 Banach-Tarski paradox, which says that if you break a sphere into pieces, each composed of an infinitely dense scattering of points, you can put the pieces together in a different way to create two spheres that are the same size as the original. Hilbert and his contemporaries worried: Was infinitistic mathematics consistent? Was it true?

Amid fears that set theory contained an actual contradiction — a proof of 0 = 1, which would invalidate the whole construct — math faced an existential crisis. The question, as Simpson frames it, was, “To what extent is mathematics actually talking about anything real? [Is it] talking about some abstract world that’s far from the real world around us? Or does mathematics ultimately have its roots in reality?”


Amid questions over the consistency of infinitistic mathematics, the great German mathematician David Hilbert called upon his colleagues to prove that it rested upon solid, finitistic logical foundations.

Even though they questioned the value and consistency of infinitistic logic, Hilbert and his contemporaries did not wish to give up such abstractions — power tools of mathematical reasoning that in 1928 would enable the British philosopher and mathematician Frank Ramsey to chop up and color infinite sets at will. “No one shall expel us from the paradise which Cantor has created for us,” Hilbert said in a 1925 lecture. He hoped to stay in Cantor’s paradise and obtain proof that it stood on stable logical ground. Hilbert tasked mathematicians with proving that set theory and all of infinitistic mathematics is finitistically reducible, and therefore trustworthy. “We must know; we will know!” he said in a 1930 address in Königsberg — words later etched on his tomb.

However, the Austrian-American mathematician Kurt Gödel showed in 1931 that, in fact, we won’t. In a shocking result, Gödel proved that no system of logical axioms (or starting assumptions) can ever prove its own consistency; to prove that a system of logic is consistent, you always need another axiom outside of the system. This means there is no ultimate set of axioms — no theory of everything — in mathematics. When looking for a set of axioms that yield all true mathematical statements and never contradict themselves, you always need another axiom. Gödel’s theorem meant that Hilbert’s program was doomed: The axioms of finitistic mathematics cannot even prove their own consistency, let alone the consistency of set theory and the mathematics of the infinite.

This might have been less worrying if the uncertainty surrounding infinite sets could have been contained. But it soon began leaking into the realm of the finite. Mathematicians started to turn up infinitistic proofs of concrete statements about natural numbers — theorems that could conceivably find applications in physics or computer science. And this top-down reasoning continued. In 1994, Andrew Wiles used infinitistic logic to prove Fermat’s Last Theorem, the great number theory problem about which Pierre de Fermat in 1637 cryptically claimed, “I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.” Can Wiles’ 150-page, infinity-riddled proof be trusted?

With such questions in mind, logicians like Simpson have maintained hope that Hilbert’s program can be at least partially realized. Although not all of infinitistic mathematics can be reduced to finitistic reasoning, they argue that the most important parts can be firmed up. Simpson, an adherent of Aristotle’s philosophy who has championed this cause since the 1970s (along with Harvey Friedman of Ohio State University, who first proposed it), estimates that some 85 percent of known mathematical theorems can be reduced to finitistic systems of logic. “The significance of it,” he said, “is that our mathematics is thereby connected, via finitistic reducibility, to the real world.”

An Exceptional Case

Almost all of the thousands of theorems studied by Simpson and his followers over the past four decades have turned out (somewhat mysteriously) to be reducible to one of five systems of logic spanning both sides of the finite-infinite divide. For instance, Ramsey’s theorem for triples (and all ordered sets with more than three elements) was shown in 1972 to belong at the third level up in the hierarchy, which is infinitistic. “We understood the patterns very clearly,” said Henry Towsner, a mathematician at the University of Pennsylvania. “But people looked at Ramsey’s theorem for pairs, and it blew all that out of the water.”

A breakthrough came in 1995, when the British logician David Seetapun, working with Slaman at Berkeley, proved that \(RT_2^2\) is logically weaker than \(RT_2^3\) and thus below the third level in the hierarchy. The breaking point between \(RT_2^2\) and \(RT_2^3\) comes about because a more complicated coloring procedure is required to construct infinite monochromatic sets of triples than infinite monochromatic sets of pairs.

Lucy Reading-Ikkanda for Quanta Magazine

“Since then, many seminal papers regarding \(RT_2^2\) have been published,” said Weiermann — most importantly, a 2012 result by Jiayi Liu (paired with a result by Carl Jockusch from the 1960s) showed that \(RT_2^2\) cannot prove, nor be proved by, the logical system located at the second level in the hierarchy, one rung below \(RT_2^3\). The level-two system is known to be finitistically reducible to “primitive recursive arithmetic,” a set of axioms widely considered the strongest finitistic system of logic. The question was whether \(RT_2^2\) would also be reducible to primitive recursive arithmetic, despite not belonging at the second level in the hierarchy, or whether it required stronger, infinitistic axioms. “A final classification of \(RT_2^2\) seemed out of reach,” Weiermann said.

But then in January, Patey and Yokoyama, young guns who have been shaking up the field with their combined expertise in computability theory and proof theory, respectively, announced their new result at a conference in Singapore. Using a raft of techniques, they showed that \(RT_2^2\) is indeed equal in logical strength to primitive recursive arithmetic, and therefore finitistically reducible.

“Everybody was asking them, ‘What did you do, what did you do?’” said Towsner, who has also worked on the classification of \(RT_2^2\) but said that “like everyone else, I did not get far.” “Yokoyama is a very humble guy. He said, ‘Well, we didn’t do anything new; all we did was, we used the method of indicators, and we used this other technique,’ and he proceeded to list off essentially every technique anyone has ever developed for working on this sort of problem.”

In one key step, the duo modeled the infinite monochromatic set of pairs in \(RT_2^2\) using a finite set whose elements are “nonstandard” models of the natural numbers. This enabled Patey and Yokoyama to translate the question of the strength of \(RT_2^2\) into the size of the finite set in their model. “We directly calculate the size of the finite set,” Yokoyama said, “and if it is large enough, then we can say it’s not finitistically reducible, and if it’s small enough, we can say it is finitistically reducible.” It was small enough.

\(RT_2^2\) has numerous finitistic consequences, statements about natural numbers that are now known to be expressible in primitive recursive arithmetic, and which are thus certain to be logically consistent. Moreover, these statements — which can often be cast in the form “for every number X, there exists another number Y such that … ” — are now guaranteed to have primitive recursive algorithms associated with them for computing Y. “This is a more applied reading of the new result,” said Kohlenbach. In particular, he said, \(RT_2^2\) could yield new bounds on algorithms for “term rewriting,” placing an upper limit on the number of times outputs of computations can be further simplified.
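
As a toy illustration of such a "for every number X, there exists another number Y" statement (this is just the classical finite Ramsey theorem, not one of the new consequences, and the search below is brute force rather than a primitive recursive bound): for monochromatic sets of size 3, a suitable Y is 6, because every red/blue coloring of the pairs from a six-element set contains a monochromatic triangle.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """Does this red/blue coloring of the pairs from {0..n-1} contain a
    monochromatic triangle?"""
    pairs = list(combinations(range(n), 2))
    col = dict(zip(pairs, coloring))
    return any(col[(a, b)] == col[(a, c)] == col[(b, c)]
               for a, b, c in combinations(range(n), 3))

def ramsey_witness():
    """Least N such that *every* coloring of the pairs from an N-element set
    has a monochromatic triangle -- the classical Ramsey number R(3,3)."""
    n = 3
    while True:
        k = len(list(combinations(range(n), 2)))
        if all(has_mono_triangle(n, c) for c in product("RB", repeat=k)):
            return n
        n += 1

print(ramsey_witness())   # 6: a pentagon coloring defeats N = 5, nothing beats N = 6
```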

Some mathematicians hope that other infinitistic proofs can be recast in the \(RT_2^2\) language and shown to be logically consistent. A far-fetched example is Wiles’ proof of Fermat’s Last Theorem, seen as a holy grail by researchers like Simpson. “If someone were to discover a proof of Fermat’s theorem which is finitistic except for involving some clever applications of \(RT_2^2\),” he said, “then the result of Patey and Yokoyama would tell us how to find a purely finitistic proof of the same theorem.”

Simpson considers the colorable, divisible infinite sets in \(RT_2^2\) “convenient fictions” that can reveal new truths about concrete mathematics. But, one might wonder, can a fiction ever be so convenient that it can be thought of as a fact? Does finitistic reducibility lend any “reality” to infinite objects — to actual infinity? There is no consensus among the experts. Avigad is of two minds. Ultimately, he says, there is no need to decide. “There’s this ongoing tension between the idealization and the concrete realizations, and we want both,” he said. “I’m happy to take mathematics at face value and say, look, infinite sets exist insofar as we know how to reason about them. And they play an important role in our mathematics. But at the same time, I think it’s useful to think about, well, how exactly do they play a role? And what is the connection?”

With discoveries like the finitistic reducibility of \(RT_2^2\) — the longest bridge yet between the finite and the infinite — mathematicians and philosophers are gradually moving toward answers to these questions. But the journey has lasted thousands of years already, and seems unlikely to end anytime soon. If anything, with results like \(RT_2^2\), Slaman said, “the picture has gotten quite complicated.”

This article was reprinted on Wired.com.

Reader Comments

  • As far as I can tell, both of Ramsey's theorems state that there's an infinite set of monochromatic pairs/triples. Is the difference between them that one set is countably infinite and the other uncountable? Or is the difference more subtle?

  • Not exactly. Ramsey's theorem for pairs is based on a logical system that does not invoke the concept of infinity, whereas the theorem for triples is founded on a logical system that relies on the concept of infinity. So it isn't really about the cardinality of the sets of pairs/triples, but the logical foundations of the theorems.

  • Something that might help to understand what's going on here is to start one level lower: Ramsey's theorem for singletons (RT^1_2) says that however you colour the integers with two colours (say red and blue), you are guaranteed to find an infinite monochromatic subset. To see this is true, simply go along the integers starting from 1 and put them into the red or the blue bag according to their colour. Since in each step you increase the size of one or the other bag, without removing anything, you end up with an infinite set. This is a finitistic proof: it never really uses infinity, but it tells you how to construct the first part of the 'infinite set'.

    Now let's try the standard proof for RT^2_2, pairs. This time we will go along the integers twice, and we will throw away a lot as we go.

    The first time, we start at 1. Because there are infinitely many numbers bigger than 1, each of which makes a pair with 1, and each of those pairs is coloured either red or blue, there are either infinitely many red pairs with 1 or infinitely many blue pairs (note: this is really using RT^1_2). I write down under 1 'red' or 'blue' depending on which it turned out to be (in case both sets of pairs are infinite, I'll write red just to break a tie), then I cross out all the numbers bigger than 1 which make the 'wrong colour' pair with 1.

    Now I move on to the next number, say s, I didn't cross out, and I look at all the pairs it makes with the un-crossed-out numbers bigger than it. There are still infinitely many, so either the red pairs or the blue pairs form an infinite set (or both). I write down red or blue below s as before, and again cross out all the numbers bigger than s which make a wrong colour pair with s. And I keep going like this; because everything stays infinite I never get stuck.

    After an infinitely long time, I can go back and look at all the numbers which I did not cross out – there is an infinite list of them. Under each is written either 'red' or 'blue', and if under (say) number t the word 'red' is written, then t forms red pairs with all the un-crossed-out numbers bigger than t. Now (using RT^1_2 again) either the word 'red' or the word 'blue' was written infinitely often, so I can pick an infinite set of numbers under which I wrote either always 'red' or always 'blue'. Suppose it was always 'red'; then if s and t are any two numbers in the collection I picked, the pair st will be red – this is because one of s and t, say s, is smaller, and by construction all the pairs from s to bigger un-crossed-out numbers, including t, are red. If it were always blue, by the same argument I get an infinite set where all pairs are blue.

    What is different here to the first case? The difference is that in order to say whether I should write 'red' or 'blue' under 1 (or any other number) in the first step, I have to 'see' the whole infinite set. I could look at a lot of these numbers and make a guess – but if the guess turns out to be wrong then it means I made a mistake at all the later steps of the process too; everything falls apart. This is not a finitistic proof – according to some logicians, you should be worried that it might somehow be wrong. Most mathematicians will say it is perfectly fine though.

    Moving up to RT^3_2, the usual proof is an argument that looks quite a lot like the RT^2_2 argument, except that instead of using RT^1_2 in the 'first pass' it uses RT^2_2. All fine; we believe RT^2_2, so no problem. But now, when you want to write down 'red' or 'blue' under 1 in this 'first pass' you have to know something more complicated about all the triples using 1; you want to know if you can find an infinite set S such that any pair s,t in S forms a red triple with 1. If not, RT^2_2 tells you that you can find an infinite set S such that any pair s,t in S forms a _blue_ triple with 1. Then you would cross off everything not in S, and keep going as with RT^2_2. The proof doesn't really get any harder for the general case RT^k_2 (or indeed changing the number of colours to something bigger than 2). If you're happy with infinity, there's nothing new to see here. If not – well, these proofs have you recursively piling up more and more appeals to something infinite as you increase k, which is not a happy place to be in if you don't like infinity.
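
The two-pass procedure described in the comment above can be mimicked in Python on a finite initial segment, with the caveat, flagged in the comment itself, that the genuinely infinitistic step ("which colour occurs infinitely often?") has to be faked by a majority vote. A rough sketch (all names hypothetical), reusing the article's example rule:

```python
from itertools import combinations

def color(a, b):
    """The article's example rule: blue if b < 2a, red otherwise (a < b)."""
    return "blue" if b < 2 * a else "red"

def thin_out(n):
    """Run the two-pass argument on {1, ..., n}; majority vote stands in
    for 'which colour occurs infinitely often'."""
    alive = list(range(1, n + 1))
    labels = {}
    survivors = []
    while alive:
        s = alive.pop(0)
        red = [x for x in alive if color(s, x) == "red"]
        blue = [x for x in alive if color(s, x) == "blue"]
        keep = red if len(red) >= len(blue) else blue   # the infinitistic step
        labels[s] = "red" if keep is red else "blue"
        survivors.append(s)
        alive = keep                                    # cross out the rest
    # Second pass: keep the survivors whose written-down label is most common.
    majority = max(("red", "blue"),
                   key=lambda c: sum(1 for s in survivors if labels[s] == c))
    return [s for s in survivors if labels[s] == majority]

mono = thin_out(100)
# By construction, every pair drawn from `mono` has the same colour.
assert len({color(a, b) for a, b in combinations(mono, 2)}) == 1
```

The output is monochromatic by construction whatever rule is used, but the majority vote is exactly where a finite truncation can diverge from the infinite argument: a colour that happens to lead among the numbers seen so far need not be the one that occurs infinitely often.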

  • Excellent article.

    The power of logic/mathematics never ceases to amaze, and the fact that results in the infinite domain have an impact on finite mathematics (which many would say can be considered far more real and in touch with reality) might just mean that Aristotle wasn't right when it came to "actual infinity" being non-existent.

  • I feel this article could be improved in some ways, as I couldn't understand several points. (I am a math professor with basically no background in logic. Ha!) Fortunately, Peter's comment above clarifies the picture greatly. I hate to give negative feedback – I typically enjoy the articles by Quanta – but here are some specific points in the article which I found confusing.

    For example, the author writes
    "Rather, it separates two kinds of mathematical statements: “finitistic” ones, which can be proved without invoking the concept of infinity, and “infinitistic” ones, which rest on the assumption — not evident in nature — that infinite objects exist."

    It would have been nice to have given a concrete simple example that illustrates the divide, especially since this is the main point of the article. I think Peter's comment above does so. (Incidentally, I tried googling "finitistically reducible" but just found technical articles, couldn't find, for example, a wikipedia page).

    Also, it would have been nice if the author, after describing what Ramsey's theorem for pairs says, emphasized that this result was proven (by presumably Ramsey, but I don't know) long ago, and that is not what is new. From what I gather, what seems to be new is a claim that one can prove the theorem for pairs with reduced demands on invoking infinity (the author does convey throughout that this is the point, although again, the lack of an example of making less demands on infinity made me struggle).

    Also, he/she writes "When you color trios of objects in an infinite set either red or blue (according to some rule), the infinite, monochrome subset of triples that \(RT_2^3\) says you’ll end up with is too complex an infinity to reduce to finitistic reasoning." Hmm, I'm not sure that it is the monochromatic set itself which is "too complex an infinity." For example, if the original infinite set is countable, then the monochromatic subset is also countably infinite – the simplest infinity. As far as I understand, the claim is that the inner workings of any proof that constructs such a monochromatic set must invoke certain appeals to infinity. The first comment, by RCJG, reflects this confusion.

    Also, regarding the illustration, the rules (e.g. B < 2^A) for coloring the edges are a red herring in that they introduce an irrelevant layer of complexity – it should be enough to say "color the edges any way you wish."

    Again, I am not an expert in the area, so please take my comments with a grain of salt – I may have completely misunderstood what is going on.

  • Peter, many thanks for the excellent comment. I'm amazed you managed to put these proofs into words.

    I wanted to add a few sentences about an aspect of this subject that I find fascinating, but which is somewhat tangential to the discussion of RT22. That is, the debate over whether the mathematics of the infinite is indispensable for doing physics. It's an open question that has been raging since Hilbert's day, with deep consequences either way.

    A lot of the math that physicists make use of to describe the universe (such as calculus) is formalizable in a logical system called predicativism, which invokes "actual infinity." But are the infinitistic features of predicativism actually required for doing physics, or can the math used in physics always be reformulated in a way that avoids actual infinity and relies only on finitistic axioms?

    If actual infinity is indispensable for explaining the universe, this would seem to imbue infinity with some “reality,” despite infinite objects not seeming to directly manifest themselves in the physical world. And in that case, “infinite objects exist” might be justified as a logical axiom in mathematics, and we can stop worrying about all of this. On the other hand, infinitistic math might not really be required for explaining the world. Simpson points out that calculus is finitistically reducible, and says, "I believe that most or all of the mathematics used in physics can be recast in a formal system which is finitistically reducible."

    In any case, this situation highlights the nonlinear relationship between math and physics that I find eternally fascinating.

  • @Natalie Wolchover, You made my perceived need for comment unnecessary except to say, 'thank you'.

  • God bless you Peter, Alex, and Natalie.

    Alex, I believe the reason for having a rule is to avoid problems with the axiom of choice – axiomatically allowing a computer to color the pairs however it wishes is exactly what we need to prove the Banach-Tarski paradox, and would not really make much sense in a system where the numbers cannot even be considered as an infinite whole.

    That said, the actual rules pictured do make a bit of a red herring, and it isn't clearly specified in the text that the theorem holds for much simpler rules like "red if A+B is even."

    That said, I love the rule in the picture for RT^3_2.

  • That's a very interesting article. Two minor points though,

    1. "However, the Austrian-American mathematician Kurt Gödel showed in 1931 that, in fact, we won’t."

    Gödel proved the two main results in 1930, and announced the first theorem at the very Königsberg symposium where Hilbert gave his 1930 address!


    The famous article itself was published in 1931.

    2. "In a shocking result, Gödel proved that no system of logical axioms (or starting assumptions) can ever prove its own consistency;"

    This isn't quite right, as it omits "consistent". Any *inconsistent* theory can prove its own consistency! So long as it can express it, of course. This is because if S is an inconsistent theory, then S proves every sentence; including Con(S), where Con(S) is the result of translating the English sentence "There is no derivation of the sentence 0=1 in the theory S" into the language of arithmetic. So, if S is inconsistent, S proves Con(S).

    A better formulation of the result would say: "no consistent system of logical axioms containing enough arithmetic can ever prove its own consistency".

    Formally, it looks like this: Suppose S is a consistent recursively axiomatizable theory containing Peano Arithmetic. Then S does not prove Con(S).

    (The "containing Peano Arithmetic" bit can be weakened as well, but this gets into some really fussy details, which are closely connected to the results in the paper described.)

  • Great article. How you guys manage to explain these questions in almost layman's language is really incredibly brilliant.

  • This "surprising new proof," which involves the gap/bridge (?) between finite/infinite, has indeed a special intellectual flavour and beauty. The human mind possesses an unknowable complexity (for us), and so does the known universe. Notions which are (apparently) physically untouchable (or even sacred) may be felt by special senses or by changing view: "…infinite structures may make proof easier to find, but in the end you didn't need them. You could give a kind of native (finitistic) proof". Real crack!
    Anyway, I believe that if we think only of "algorithmic constructivism" in order to model the real world, "something as large as needed, but finite" may fairly denote the infinite.

  • Together with Carlos Di Prisco, I have long investigated the combinatorial contents of Ramsey-type problems, and we've shown in the paper below that there is a new partition principle (the so-called "Principle of Ariadne") which is not only independent of the axiom of choice, but also consistent with the usual axioms of ZF set theory. This helps us to understand, if not the finite-infinite divide, then at least that there is more than one way to generalize finite principles of order (or choice) to the infinite. In other words, several infinite principles of choice are possible, with the same finite content.

    W. A. Carnielli and C. A. Di Prisco. Some results on polarized partition relations of higher dimension. Mathematical Logic Quarterly 39 (1993):461–474.

  • "Does finitistic reducibility lend any “reality” to infinite objects — to actual infinity? There is no consensus among the experts." No…This is basically a form of geometry branching into mathematics. How does saying that line segments have an infinite number of points shed light on a line? It simply doesn't. I find -1/12 and 1/2 far more interesting when it comes to infinity.

  • @Natalie Wolchover. As others have said, THANK YOU for saying what we were thinking, and probably saying better than we could have 🙂

    I particularly love the boundary between Math and Physics.

    It's especially intriguing, to my mind, when you start looking at the quantum world. At the Planck scale, presumably there is no "going smaller", even in spacetime… That might translate into "goodbye infinitesimals" – which perhaps means "goodbye to infinity" as well?

    And yet Quantum Mechanics is formulated using complex numbers, a set that is not only infinite but uncountable…

    I do not know about mathematics, but as far as physics is concerned, the connection between the discrete and the continuum is most fundamental. Particle mechanics and wave mechanics are different theories. We do not know much about their connection; if there is any connection, it is certainly confusing at best.

    This problem manifests most seriously in Quantum Mechanics as the so called "wave-particle duality". This duality certainly causes a lot of questions. Niels Bohr spent all of his life as a physicist to investigate these problems. I do not know what happened since then. Nobody talks about it anymore. Are these issues all resolved?

    Schrödinger mixed the relativistic wave theory of de Broglie with the particle dynamics of Hamilton to get his wave equation. He himself did not understand what it meant ontologically. Von Neumann first-quantized Schrödinger's equation to get an eigenvector analysis of particles. Dirac second-quantized Schrödinger's wave equation to get the particle dual of Schrödinger's waves. These layers of duality between particles and waves make it next to impossible for QM to have a clear ontology.

    This obscurity between the discrete and the continuum in QM creates a mystery that goes as follows: the Uncertainty Principle asserts that we cannot localize a particle twice, so there must be no trajectory. But every particle-physics experiment in an accelerator produces trajectories in the particle-detection chamber.

    I think the connection between physics and mathematics is very important. As all mathematical logicians agree, mathematical theories must be relevant to reality. It is a good thing that modern mathematicians have started to understand the importance of this connection between physics and mathematics.

    A final question: is there any theory of physics which is purely finitistic or discrete? Certainly QM is not!

  • From Korzybski, father of general semantics:
    Why has mathematizing proved to be at each historical period the most excellent human activity, producing results of such enormous importance and unexpected validity as not to be comparable with any other of the musings of man? … the mathematical method and structure … it is perhaps the easiest, or simplest, activity; and, therefore, it has been possible to produce a structurally perfect product…

    Mathematical abstractions are characterized by the fact that they have all particulars included. … exclusively in mathematics does deduction, if done correctly, work absolutely… Not so in abstracting from physical objects. Here we proceed by forgetting, our deductions work only relatively, and must be revised continuously whenever new particulars are discovered.

    In the rough, a symbol is a sign that stands for something… Before a noise, etc., may become a symbol, something must exist for the symbol to symbolize.

    Since Einstein and the newer quantum mechanics, it has become increasingly evident that the only content of ‘knowing’ is of a structural character.

    The only link between the verbal and objective world is exclusively structural, necessitating the conclusion that the only content of all ‘knowledge’ is structural. Now structure can be considered as a complex of relations, and ultimately as multi-dimensional order. From this point of view, all language can be considered as names for unspeakable entities on the objective level, be it things or feelings, or as names of relations. In fact, even objects could be considered as relations between the sub-microscopic events and the human nervous system. If we enquire as to what the last relations represent, we find that an object represents an abstraction of a low order produced by our nervous system as the result of sub-microscopic events acting as stimuli upon the nervous system.

    If we consider that all we deal with represents constantly changing sub-microscopic, interrelated processes which are not, and cannot be ‘identical with themselves’, the old dictum that ‘everything is identical with itself’ becomes in [today’s understanding of the universe] a principle invariably false to facts.… and [we] must abandon permanently the “is” of identity.

  • Peter's clarity of exposition is indeed remarkable, and makes it easier to see that, in order to support the conclusion that their proof of Ramsey's Theorem for pairs is 'finitistically reducible', Patey-Yokoyama must assume:

    (i) that ZFC is consistent, and therefore has a Tarskian interpretation in which the 'truth' of a ZFC formula can be evidenced;

    (ii) that their result must be capable of an evidence-based Tarskian interpretation over the finite structure of the natural numbers.

    As to (i), Peter has already pointed out in his final sentence that there are (serious?) reservations to accepting that the ZF axiom of infinity can have any evidence-based interpretation.

    As to (ii), Ramsey's Theorem is a ZFC formula of the form (Ex)F(x) (whose proof must appeal to an axiom of choice).

    Now in ZF (as in any first-order theory that appeals to the standard first-order logic FOL) the formula (Ex)F(x) is merely an abbreviation for the formula ~(Ax)~F(x).

    So, under any consistent 'finitistically reducible' interpretation of such a formula, there must be a unique, unequivocal, evidence-based Tarskian interpretation of (Ax)F(x) over the domain of the natural numbers.

    Now, if we are to avoid intuitionistic objections to admitting `unspecified' natural numbers in the definition of quantification under any evidence-based Tarskian interpretation of a formal system of arithmetic, we face the following ambiguity:

    (a) Is the formula (Ax)F(x) to be interpreted constructively as:

    For any natural number n, there is an algorithm T(n) (say, a deterministic Turing machine) which evidences that {F(1), F(2), … F(n)} are all true; or,

    (b) is the formula (Ax)F(x) to be interpreted finitarily as:

    There is a single algorithm T (say, a deterministic Turing machine) which evidences that, for any natural number n, F(n) is true, i.e., each of {F(1), F(2), …} is true?
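    The (a)/(b) distinction above can be sketched in code. This is purely a toy illustration of my own, not the commenter's formalism: F here is a sample decidable predicate, and for decidable predicates the two readings coincide; the commenter's point is that they can come apart when no single uniform verifier exists.

    ```python
    # Toy predicate F(n): "the sum of the first n odd numbers equals n^2".
    def F(n: int) -> bool:
        return sum(2 * k - 1 for k in range(1, n + 1)) == n * n

    # (a) Constructive reading: for EACH n there is SOME algorithm T_n that
    # certifies {F(1), ..., F(n)}. Each T_n may be a different program,
    # generated on demand for that particular n.
    def make_T(n: int):
        def T_n() -> bool:
            return all(F(k) for k in range(1, n + 1))  # finite check for this n
        return T_n

    # (b) Finitary reading: ONE fixed algorithm T certifies F(n) for ANY n
    # handed to it -- a single program, uniform in n.
    def T(n: int) -> bool:
        return F(n)

    assert make_T(100)()                      # a per-instance certificate
    assert all(T(n) for n in range(1, 100))   # one uniform verifier
    ```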

    As Peter has pointed out in his analysis of Ramsey's Theorem RT_2^2 for pairs, the proof of the Theorem necessitates that:

    "I have to 'see' the whole infinite set. I could look at a lot of these numbers and make a guess – but if the guess turns out to be wrong then it means I made a mistake at all the later steps of the process too; everything falls apart. This is not a finitistic proof – according to some logicians, you should be worried that it might somehow be wrong."

    In other words, Patey-Yokoyama's conclusion (that their new proof is 'finitistically reducible') would only hold if they have established (b) somewhere in their proof; but a cursory reading of their paper does not suggest this to be the case.

    The significance of the distinction between (a) and (b) is formally detailed in the following paper that is due to appear in the December 2016 issue of 'Cognitive Systems Research':

    'The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas' Goedelian Thesis'



  • Many thanks to Peter for the awesome explanation. And, in connection with the issue Natalie raised and several others followed, the book:

    Feng Ye, Strict Finitism and the Logic of Mathematical Applications, Springer, 2011

    may be of interest.

    Here is the Springer introduction to the book (http://www.springer.com/us/book/9789400713468):

    This book intends to show that radical naturalism (or physicalism), nominalism and strict finitism account for the applications of classical mathematics in current scientific theories. The applied mathematical theories developed in the book include the basics of calculus, metric space theory, complex analysis, Lebesgue integration, Hilbert spaces, and semi-Riemann geometry (sufficient for the applications in classical quantum mechanics and general relativity). The fact that so much applied mathematics can be developed within such a weak, strictly finitistic system, is surprising in itself. It also shows that the applications of those classical theories to the finite physical world can be translated into the applications of strict finitism, which demonstrates the applicability of those classical theories without assuming the literal truth of those theories or the reality of infinity.

  • This subject is interesting if for no other reason than that mathematical language developed out of a form of mysticism (Platonism) and has retained some of those characteristics for over two thousand years.

    The intuitionists failed to create the reformation that would have made these subjects quite simple to understand.

    We have reformed physics out of the Aristotelian "first mover" category of language. We have reformed morality to be expressed in economic language. But we have not reformed the language of mathematics by reducing mathematical Platonism to operational (existential and computable) axioms.

    If we do so, we see that the discipline of mathematics has evolved largely by eliminating axioms of correspondence (or asserting axioms of non-correspondence), leaving mathematicians to find methods of deduction with fewer and fewer properties to work with.

    From this perspective, mathematical reasoning has been an exercise in the exploration of deduction of deterministic systems of correspondence (pairs) using decreasing information because of decreasing axioms (rules) of correspondence.

    Or, more simply said: mathematics evolved from the pairing of stones while counting sheep, then giving names to the stones, then positional names to larger quantities of stones, then to sets of stones, then to ratios of stones. Then space, then time. Then deductions from stones, space, and time.

    So we have merely added and removed properties (axioms) of correspondence with reality, and explored how to perform deductions with more or fewer of them.

    That we have not reformed the philosophy and language of mathematics as we have in other fields is because the intuitionists in all fields (Bridgman in physics, Mises in economics, Brouwer in mathematics, and various authors in psychology) faced different incentives and different threats to their credibility. Interestingly, psychology has reformed through the use of 'operationism', the physical sciences have reformed in large part, economics has not reformed, and mathematics has not. And the answer why is interesting: psychology was under threat of classification as a pseudoscience, threatening incomes. Economists currently fight that battle, but the political utility of models, plus the long time (a generation or more) before the effects of policy become visible, provides a convenient escape from criticism. Mathematics has not reformed, in large part, because unlike psychology, economics, or the physical sciences **its external consequences are irrelevant**. Meaning that there is no pressure to reform, because mathematicians outside of the sciences have no feedback mechanism to force them to.

    There is nothing magical or mysterious about mathematics. What's interesting is how we add and subtract properties of reality in order to create models that retain determinism and allow us to perform deductions, with decreasing information, about scale-independent patterns.

    The only reason it's even vaguely interesting is that the human mind is so easily overwhelmed by even a few short-term-memory facts and a few axes of causality. Almost all mathematical operations (transformations) are determined by the capacity of our minds, and greater minds might not need symbols and operators of similar simplicity in order to see deductions or relations of far greater complexity.

    So, mathematics is trivial, really. But if you talk about it in magic words, it's going to sound magical. Really, it's just a matter of not being able to sense relations with our minds, the same way we cannot see distant objects in the heavens with our eyes, hear distant sounds with our ears, or feel subtle vibrations with our feet.

    We use tools of all kinds to increase the power of our senses, and mathematics consists of states and operations that let humans operate on, and sense through complex deterministic models, what we cannot sense and perceive unassisted.

    The moment you add or remove an axiom (a command, or a fact), the results are deterministic. The interesting part is only that we are developing the art of deduction for increasingly informationally sparse relations.

    Curt Doolittle
    The Propertarian Institute
    Kiev, Ukraine

  • I only got a B.A. in math, so I am an ignoramus. I recall that when I was first introduced to limits and convergence, I found the language of an "infinite" series "converging" to a certain number misleading. All it really means is that no matter how small a neighborhood you specify, if n is large enough, you get inside that neighborhood. You don't really need the concept of infinity to talk about convergence. My point is that sometimes "infinity" is used unnecessarily in mathematics. There is really no such thing as an "infinite" series.
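    The epsilon-neighborhood point above can be made concrete with a short sketch (my own illustration, using the geometric series 1/2^k, whose limit is 2): for any tolerance eps, a finite computation exhibits an n beyond which the partial sum lies within eps of the limit, with no completed infinity anywhere.

    ```python
    def partial_sum(n: int) -> float:
        """Sum of the first n terms 1/2^k, k = 0..n-1; equals 2 - 2**(1 - n)."""
        return sum(0.5 ** k for k in range(n))

    def n_for_tolerance(eps: float) -> int:
        """Smallest n with |2 - partial_sum(n)| < eps, found by finite search."""
        n = 1
        while 2.0 - partial_sum(n) >= eps:
            n += 1
        return n

    # For each tolerance, a finite check certifies "convergence to within eps".
    for eps in (0.1, 0.001, 1e-9):
        n = n_for_tolerance(eps)
        assert abs(2.0 - partial_sum(n)) < eps
    ```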

  • I love this sort of article, because here at last we have top mathematicians and logicians with tenure and careers admitting what I have been saying since I was sixteen: our understanding of mathematical infinity is inconsistent, not an 'agreed fact', and very likely badly broken. At last the truth appears, as it recently has in physics with the admission that the standard model must be wrong.
    Now, for fun, we can try counting the 'uncountable' set of the real numbers.

    1. Extract the set of the natural numbers, which we accept is i1, countably infinite.

    2. Recognize that the remaining set of real numbers consists of i1 sets of numbers, each of which can be represented as a member of the first set followed by a potentially infinite sequence of digits after a decimal point.

    3. Recognize that each and every one of these sequences is in fact the representation of a natural number, i.e., a member of the first set, and we can say for certain how many there are: i1.

    4. The number of real numbers is therefore i1^2 + i1.

    I leave the formal proof to someone with a larger margin to write in 🙂

  • This is an interesting exploration with computer science application.

    But as for the rest, I react much as Curt Doolittle does, except that I don't think physics is axiomatizable (cf. quantization methods); in that respect this remains a mystic Platonist divide between mathematical games and real-world applications. Reality pertains to robust (observable) characteristics of nature, so math is excluded by fiat.

    @Natalie: "If actual infinity is indispensable for explaining the universe, this would seem to imbue infinity with some “reality,” despite infinite objects not seeming to directly manifest themselves in the physical world."

    That the Hilbert spaces of quantum physics are infinite-dimensional is because they have no physical units; they are vector spaces over states that are expressed numerically, often as function spaces. In other words, we shouldn't confuse mathematical infinities with physically observable characteristics that can be properties of the real world. Cf. how an eternal inflationary cosmology can be infinite in the measurable dimensions of time and space, yet we can probe such a system from a finite local universe.

    @Julian West: Planck volumes can't be geometrical features of spacetime if you want to preserve special relativity. Some supernova photon timing and polarization results imply that relativity holds, as does string theory.

    @Matthew: "At last the truth appears as it has recently in physics with the admission that the standard model must be wrong."

    Even if you don't specify which "standard model" there is no such recent admission in the usual suspects (cosmology? particle physics?) that I know about.

    What is usually noted is that they are incomplete, but that is different from being wrong against our observations to date. Cf. Newtonian gravity vs. general relativity: we still use the former for rocket science but not for everyday positioning (GPS).

  • I have a background in traditional Indian math (Ganita) in addition to being a computer scientist and practical programmer. I find the deep assumptions of Western metaphysics in Western math troubling and flawed, especially the understanding of infinities and the arbitrary "mechanics" of proof. This is a wonderful example.

    Too many flawed assumptions. With regard to Peter's explanation of the proof:
    "After an infinitely long time, I can go back" – how?

    For more on this perspective (assuming this is not moderated), I would recommend http://ckraju.net/papers/Eternity-and-infinity.pdf

  • @Torbjörn. Agreed, I am being deliberately provocative. I was referring to both particle physics and cosmology, where the standard models are now admitted to be incomplete, which I am loosely interpreting as wrong. The incompleteness is of two kinds. First, there are phenomena, such as black holes, the big bang, the expansion of the universe, and cold fusion, which they don't adequately model or explain. More importantly, they are philosophically incomplete because they cannot tell us why. Why are there electroweak and strong forces? Why did the bang happen at all? Why is gravity able to act on objects travelling apart at the speed of light when no communication is possible?
    That last one, physics might have a chance at, but the deeper 'why' questions it will likely never answer, and therefore it will not deliver the ultimate truth that many 20th-century Western minds have hoped it would. This article implies that the same is true in mathematics, and that is going to be an even harder pill for our civilization to swallow.

  • To be honest, the 'enumeration' proof (for lack of a better word) of RT^2_2 is quite confusing. I remember proving this with the probabilistic method, which I feel is more elegant (don't ask me for the proof now; I'm still looking for it, and it's also possible that I showed something totally different). In any case, how would you then reconcile that with the notion of finitistic reducibility?
