
Mathematics is being rebuilt on top of new foundations.

On a recent train trip from Lyon to Paris, Vladimir Voevodsky sat next to Steve Awodey and tried to convince him to change the way he does mathematics.

Voevodsky, 48, is a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J. He was born in Moscow but speaks nearly flawless English, and he has the confident bearing of someone who has no need to prove himself to anyone. In 2002 he won the Fields Medal, which is often considered the most prestigious award in mathematics.

Now, as their train approached the city, Voevodsky pulled out his laptop and opened a program called Coq, a proof assistant that provides mathematicians with an environment in which to write mathematical arguments. Awodey, a mathematician and logician at Carnegie Mellon University in Pittsburgh, Pa., followed along as Voevodsky wrote a definition of a mathematical object using a new formalism he had created, called univalent foundations. It took Voevodsky 15 minutes to write the definition.

“I was trying to convince [Awodey] to do [his mathematics in Coq],” Voevodsky explained during a lecture this past fall. “I was trying to convince him that it’s easy to do.”

The idea of doing mathematics in a program like Coq has a long history. The appeal is simple: Rather than relying on fallible human beings to check proofs, you can turn the job over to computers, which can tell whether a proof is correct with complete certainty. Despite this advantage, computer proof assistants haven’t been widely adopted in mainstream mathematics. This is partly because translating everyday math into terms a computer can understand is cumbersome and, in the eyes of many mathematicians, not worth the effort.

For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.

“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

This is something Voevodsky has learned through personal experience. In 1999 he discovered an error in a paper he had written seven years earlier. Voevodsky eventually found a way to salvage the result, but in an article last summer in the IAS newsletter, he wrote that the experience scared him. He began to worry that unless he formalized his work on the computer, he wouldn’t have complete confidence that it was correct.

But taking that step required him to rethink the very basics of mathematics. The accepted foundation of mathematics is set theory. Like any foundational system, set theory provides a collection of basic concepts and rules, which can be used to construct the rest of mathematics. Set theory has sufficed as a foundation for more than a century, but it can’t readily be translated into a form that computers can use to check proofs. So with his decision to start formalizing mathematics on the computer, Voevodsky set in motion a process of discovery that ultimately led to something far more ambitious: a recasting of the underpinnings of mathematics.

Set Theory and a Paradox

Set theory grew out of an impulse to put mathematics on an entirely rigorous footing — a logical basis even more secure than numbers themselves. Set theory begins with the set containing nothing — the null set — which is used to define the number zero. The number 1 can then be built by defining a new set with one element — the null set. The number 2 is the set that contains two elements — the null set (0) and the set that contains the null set (1). In this way, each whole number can be defined as the set of sets that came before it.

Set theory constructs all of mathematics by starting with literally nothing — the null set — which is defined to be zero. The set that contains a single element — the null set — becomes the number 1, the set that contains two elements — the null set and the set containing the null set — becomes the number 2, and so on.
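
This construction is easy to mimic in code. Here is a minimal sketch in Python, using frozenset as a stand-in for mathematical sets (the variable names are illustrative):

```python
# Von Neumann's encoding: each whole number is the set of all smaller numbers.
zero = frozenset()             # 0 is the empty (null) set {}
one = frozenset({zero})        # 1 = {0}
two = frozenset({zero, one})   # 2 = {0, 1}

def successor(n):
    """n + 1 is the set containing all of n's elements plus n itself."""
    return n | frozenset({n})

three = successor(two)         # 3 = {0, 1, 2}
assert len(three) == 3         # the number n contains exactly n elements
```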

Once the whole numbers are in place, fractions can be defined as pairs of whole numbers, decimals can be defined as sequences of digits, functions in the plane can be defined as sets of ordered pairs, and so on. “You end up with complicated structures, which are a set of things, which are a set of things, which are a set of things, all the way down to the metal, to the empty set at the bottom,” said Michael Shulman, a mathematician at the University of San Diego.

Set theory as a foundation includes these basic objects — sets — and logical rules for manipulating those sets, from which the theorems in mathematics are derived. An advantage of set theory as a foundational system is that it is very economical — every object mathematicians could possibly want to use is ultimately built from the null set.

On the other hand, it can be tedious to encode complicated mathematical objects as elaborate hierarchies of sets. This limitation becomes problematic when mathematicians want to think about objects that are equivalent or isomorphic in some sense, if not necessarily equal in all respects. For example, the fraction ½ and the decimal 0.5 represent the same real number but are encoded very differently in terms of sets.

“You have to build up a specific object and you’re stuck with it,” Awodey said. “If you want to work with a different but isomorphic object, you’d have to build that up.”

But set theory isn’t the only way to do mathematics. The proof assistant programs Coq and Agda, for example, are based on a different formal system called type theory.

Type theory has its origins in an attempt to fix a critical flaw in early versions of set theory, which was identified by the philosopher and logician Bertrand Russell in 1901. Russell noted that some sets contain themselves as a member. For example, consider the set of all things that are not spaceships. This set — the set of non-spaceships — is itself not a spaceship, so it is a member of itself.

Russell defined a new set: the set of all sets that do not contain themselves. He asked whether that set contains itself, and he showed that answering that question produces a paradox: If the set does contain itself, then it doesn’t contain itself (because the only objects in the set are sets that don’t contain themselves). But if it doesn’t contain itself, it does contain itself (because the set contains all the sets that don’t contain themselves).
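
In symbols, the whole paradox fits on one line. Writing Russell's set and the question it raises in standard notation:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```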

Russell created type theory as a way out of this paradox. In place of set theory, Russell’s system used more carefully defined objects called types. Russell’s type theory begins with a universe of objects, just like set theory, and those objects can be collected in a “type” called a SET. Within type theory, the type SET is defined so that it is only allowed to collect objects that aren’t collections of other things. If a collection does contain other collections, it is no longer allowed to be a SET, but is instead something that can be thought of as a MEGASET — a new kind of type defined specifically as a collection of objects which themselves are collections of objects.

From here, the whole system arises in an orderly fashion. One can imagine, say, a type called a SUPERMEGASET that collects only objects that are MEGASETS. Within this rigid framework, it becomes illegal, so to speak, to even ask the paradox-inducing question, “Does the set of all sets that do not contain themselves contain itself?” In type theory, SETS only contain objects that are not collections of other objects.
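
Modern proof assistants carry exactly this hierarchy. A minimal sketch in Lean 4 (a close cousin of the Coq and Agda systems mentioned above; the correspondence with SET and MEGASET is illustrative):

```lean
-- `Type`, `Type 1`, `Type 2`, ... play the roles of SET, MEGASET,
-- SUPERMEGASET: each universe is an object only of the next level up.
#check (Nat : Type)       -- ordinary objects such as Nat live in the lowest universe
#check (Type : Type 1)    -- the lowest universe itself lives one level up
#check (Type 1 : Type 2)  -- and so on forever: no universe can contain itself
```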

An important distinction between set theory and type theory lies in the way theorems are treated. In set theory, a theorem is not itself a set — it’s a statement about sets. By contrast, in some versions of type theory, theorems and SETS are on equal footing. They are “types” — a new kind of mathematical object. A theorem is the type whose elements are all the different ways the theorem can be proved. So, for example, there is a single type that collects all the proofs of the Pythagorean theorem.
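
This “propositions as types” idea is visible directly in a proof assistant. A minimal sketch in Lean 4 (the statement is illustrative):

```lean
-- The statement `2 + 2 = 4` is a type; a proof of it is an element of that type.
theorem two_plus_two : 2 + 2 = 4 := rfl  -- `rfl` (reflexivity) is one such element

#check two_plus_two  -- two_plus_two : 2 + 2 = 4
```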

To illustrate this difference between set theory and type theory, consider two sets: Set A contains two apples and Set B contains two oranges. A mathematician might consider these sets equivalent, or isomorphic, because they have the same number of objects. The way to show formally that these two sets are equivalent is to pair objects from the first set with objects from the second. If they pair evenly, with no objects left over on either side, they’re equivalent.

When you do this pairing, you quickly see that there are two ways to show the equivalence: Apple 1 can be paired with Orange 1 and Apple 2 with Orange 2, or Apple 1 can be paired with Orange 2 and Apple 2 with Orange 1. Another way to say this is to state that the two sets are isomorphic to each other in two ways.

In a traditional set theory proof of the theorem Set A ≅ Set B (where the symbol ≅ means “is isomorphic to”), mathematicians are concerned only with whether one such pairing exists. In type theory, the theorem Set A ≅ Set B can be interpreted as a collection, consisting of all the different ways of demonstrating the isomorphism (which in this case is two). There are often good reasons in mathematics to keep track of the various ways in which two objects (like these two sets) are equivalent, and type theory does this automatically by bundling equivalences together into a single type.
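
A quick way to see the two equivalences is simply to enumerate them. A minimal sketch in Python (the element names are illustrative):

```python
from itertools import permutations

apples = ["apple_1", "apple_2"]
oranges = ["orange_1", "orange_2"]

# Every bijection between two 2-element sets corresponds to an ordering of one set.
pairings = [list(zip(apples, perm)) for perm in permutations(oranges)]
for pairing in pairings:
    print(pairing)
# Prints two pairings. In type-theoretic terms, the type "apples ≅ oranges"
# has exactly two elements, one for each way of proving the isomorphism.
```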

This is especially useful in topology, a branch of mathematics that studies the intrinsic properties of spaces, like a circle or the surface of a doughnut. Studying spaces would be impractical if topologists had to think separately about all possible variations of spaces with the same intrinsic properties. (For example, circles can come in any size, yet every circle shares the same basic qualities.) A solution is to reduce the number of distinct spaces by considering some of them to be equivalent. One way topologists do this is with the notion of homotopy, which provides a useful definition of equivalence: Spaces are homotopy equivalent if, roughly speaking, one can be deformed into the other by shrinking or thickening regions, without tearing.

A point and a line are homotopy equivalent, which is another way of saying they’re of the same homotopy type. The letter P is of the same homotopy type as the letter O (the tail of the P can be collapsed to a point on the boundary of the letter’s upper circle), and both P and O are of the same homotopy type as the other letters of the alphabet that contain one hole — A, D, Q and R.

Homotopy types are used to classify the essential features of an object. The letters A, R and Q all have one hole, and so are the same homotopy type. The letters C, X and K are also of the same homotopy type, as they can all transform into a line.

Topologists use different methods for assessing the qualities of a space and determining its homotopy type. One way is to study the collection of paths between distinct points in the space, and type theory is well-suited to keeping track of these paths. For instance, a topologist might think of two points in a space as equivalent whenever there is a path connecting them. Then the collection of all paths between points x and y can itself be viewed as a single type, which represents all proofs of the theorem x = y.

Homotopy types can be constructed from paths between points, but an enterprising mathematician can also keep track of paths between paths, and paths between paths between paths, and so on. These paths between paths can be thought of as higher-order relationships between points in a space.
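
Type theory has ready-made notation for this tower. In the identity-type notation standard in Martin-Löf-style systems (introduced in the next section), each level of paths is itself a type:

```latex
p,\, q : \mathrm{Id}_A(x, y)
  % paths between the points x and y of the space A
h : \mathrm{Id}_{\mathrm{Id}_A(x, y)}(p, q)
  % a path between the paths p and q
  % ...and so on, one identity type for each level of paths
```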

Voevodsky tried on and off for 20 years, starting as an undergraduate at Moscow State University in the mid-1980s, to formalize mathematics in a way that would make these higher-order relationships — paths of paths of paths — easy to work with. Like many others during this period, he tried to accomplish this within the framework of a formal system called category theory. And while he achieved limited success in using category theory to formalize particular regions of mathematics, there remained regions of mathematics that categories couldn’t reach.

Voevodsky returned to the problem of studying higher-order relationships with renewed interest in the years after he won the Fields Medal. In late 2005, he had something of an epiphany. As soon as he started thinking about higher-order relationships in terms of objects called infinity-groupoids, he said, “many things started to fall into place.”

Infinity-groupoids encode all the paths in a space, including paths of paths, and paths of paths of paths. They crop up in other frontiers of mathematical research as ways of encoding similar higher-order relationships, but they are unwieldy objects from the point of view of set theory. Because of this, they were thought to be useless for Voevodsky’s goal of formalizing mathematics.

Yet Voevodsky was able to create an interpretation of type theory in the language of infinity-groupoids, an advance that allows mathematicians to reason efficiently about infinity-groupoids without ever having to think of them in terms of sets. This advance ultimately led to the development of univalent foundations.

Voevodsky was excited by the potential of a formal system built on groupoids, but also daunted by the amount of technical work required to realize the idea. He was also concerned that any progress he made would be too complicated to be reliably verified through peer review, which Voevodsky said he was “losing faith in” at the time.

Toward a New Foundational System

With groupoids, Voevodsky had his object, which left him needing only a formal framework in which to organize them. In 2005 he found it in an unpublished paper called FOLDS, which introduced Voevodsky to a formal system that fit uncannily well with the kind of higher-order mathematics he wanted to practice.

In 1972 the Swedish logician Per Martin-Löf introduced his own version of type theory inspired by ideas from Automath, a formal language for checking proofs on the computer. Martin-Löf type theory (MLTT) was eagerly adopted by computer scientists, who have used it as the basis for proof-assistant programs.

In the mid-1990s, MLTT intersected with pure mathematics when Michael Makkai, a specialist in mathematical logic who retired from McGill University in 2010, realized it might be used to formalize categorical and higher-categorical mathematics. Voevodsky said that when he first read Makkai’s work, set forth in FOLDS, the experience was “almost like talking to myself, in a good sense.”

Vladimir Voevodsky’s univalent foundations program aims to rebuild mathematics in a way that will allow computers to check all mathematical proofs.

Voevodsky followed Makkai’s path but used groupoids instead of categories. This allowed him to create deep connections between homotopy theory and type theory.

“This is one of the most magical things, that somehow it happened that these programmers really wanted to formalize [type theory],” Shulman said, “and it turns out they ended up formalizing homotopy theory.”

Voevodsky agrees that the connection is magical, though he sees the significance a little differently. To him, the real potential of type theory informed by homotopy theory is as a new foundation for mathematics that’s uniquely well-suited both to computerized verification and to studying higher-order relationships.

Voevodsky first perceived this connection when he read Makkai’s paper, but it took him another four years to make it mathematically precise. From 2005 to 2009 Voevodsky developed several pieces of machinery that allow mathematicians to work with sets in MLTT “in a consistent and convenient way for the first time,” he said. These include a new axiom, known as the univalence axiom, and a complete interpretation of MLTT in the language of simplicial sets, which (in addition to groupoids) are another way of representing homotopy types.
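
The univalence axiom itself can be stated in a single line. In the notation commonly used in the homotopy type theory literature, it says that for types $A$ and $B$ in a universe $\mathcal{U}$, the canonical map sending an equality to an equivalence is itself an equivalence; informally, equivalent objects may be treated as equal:

```latex
\mathrm{ua} : (A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```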

This consistency and convenience reflects something deeper about the program, said Daniel Grayson, an emeritus professor of mathematics at the University of Illinois at Urbana-Champaign. The strength of univalent foundations lies in the fact that it taps into a previously hidden structure in mathematics.

“What’s appealing and different about [univalent foundations], especially if you start viewing [it] as replacing set theory,” he said, “is that it appears that ideas from topology come into the very foundation of mathematics.”

From Idea to Action

Building a new foundation for mathematics is one thing. Getting people to use it is another. By late 2009 Voevodsky had worked out the details of univalent foundations and felt ready to begin sharing his ideas. He understood people were likely to be skeptical. “It’s a big thing to say I have something which should probably replace set theory,” he said.

Voevodsky first discussed univalent foundations publicly in lectures at Carnegie Mellon in early 2010 and at the Oberwolfach Research Institute for Mathematics in Germany in 2011. At the Carnegie Mellon talks he met Steve Awodey, who had been doing research with his graduate students Michael Warren and Peter Lumsdaine on homotopy type theory. Soon after, Voevodsky decided to bring researchers together for a period of intensive collaboration in order to jump-start the field’s development.

Along with Thierry Coquand, a computer scientist at the University of Gothenburg in Sweden, Voevodsky and Awodey organized a special research year to take place at IAS during the 2012-2013 academic year. More than thirty computer scientists, logicians and mathematicians came from around the world to participate. Voevodsky said the ideas they discussed were so strange that at the outset, “there wasn’t a single person there who was entirely comfortable with it.”

The ideas may have been slightly alien, but they were also exciting. Shulman deferred the start of a new job in order to take part in the project. “I think a lot of us felt we were on the cusp of something big, something really important,” he said, “and it was worth making some sacrifices to be involved with the genesis of it.”

Following the special research year, activity split in a few different directions. One group of researchers, which includes Shulman and is referred to as the HoTT community (for homotopy type theory), set off to explore the possibilities for new discoveries within the framework they’d developed. Another group, which identifies as UniMath and includes Voevodsky, began rewriting mathematics in the language of univalent foundations. Their goal is to create a library of basic mathematical elements — lemmas, proofs, propositions — that mathematicians can use to formalize their own work in univalent foundations.

As the HoTT and UniMath communities have grown, the ideas that underlie them have become more visible among mathematicians, logicians and computer scientists. Henry Towsner, a logician at the University of Pennsylvania, said that there seems to be at least one presentation on homotopy type theory at every conference he attends these days, and that the more he learns about the approach, the more it makes sense. “It was this buzzword,” he said. “It took me a while to understand what they were actually doing and why it was interesting and a good idea, not a gimmicky thing.”

A lot of the attention univalent foundations has received is owing to Voevodsky’s standing as one of the greatest mathematicians of his generation. Michael Harris, a mathematician at Columbia University, includes a long discussion of univalent foundations in his new book, Mathematics Without Apologies. He is impressed by the mathematics that surround the univalence model, but he is more skeptical of Voevodsky’s larger vision of a world in which all mathematicians formalize their work in univalent foundations and check it on the computer.

“The drive to mechanize proof and proof verification doesn’t strongly motivate most mathematicians as far as I can tell,” he said. “I can understand why computer scientists and logicians would be excited, but I think mathematicians are looking for something else.”

Voevodsky is aware that a new foundation for mathematics is a tough sell, and he admits that “at the moment there is really more hype and noise than the field is ready for.” He is currently using the language of univalent foundations to formalize the relationship between MLTT and homotopy theory, which he considers a necessary next step in the development of the field. Voevodsky also has plans to formalize his proof of the Milnor conjecture, the achievement for which he earned a Fields Medal. He hopes that doing so might act as “a milestone which can be used to create motivation in the field.”

Voevodsky would eventually like to use univalent foundations to study aspects of mathematics that have been inaccessible within the framework of set theory. But for now he’s approaching the development of univalent foundations cautiously. Set theory has undergirded mathematics for more than a century, and if univalent foundations is to have similar longevity, Voevodsky knows it’s important to get things right at the start.




Reader Comments

  • One of the advantages of his proof method is that one has to have very clear and very specific steps to program the computer proof: logical steps that can be more easily understood by others.

  • Thank you for this fascinating article. One minor quibble: the caption under the “building up numbers” diagram includes the text: “Set theory constructs all of mathematics by starting with literally nothing — the null set — which is defined to be zero.” But the null set is not “literally nothing”. It is a set with no elements, which is not nothing. 🙂

  • …a computer-aided quest to eliminate human error. To succeed, he has to…
    anyone else see the irony?

  • Since homotopy theory is part and parcel of the care and feeding of superstring theory, I wonder how Voevodsky’s new bestiary of tools might assist in the exploration of that mostly untamed and ultra-tangled zoo.

  • My basic problem is to define what mathematics is in the first place. If elements are of the same static type, such as all apples or all oranges, one can say that the operation of addition can be performed on them, and that multiplication and the like are shorthand versions of that operation. If the elements are of a dynamic type, such as liquids, gases or speeds, then the integration operation can be performed, with errors. But if we mix more than one homogeneous set, then we create algebra: four apples and 8 bananas and so on. How we change the way we teach mathematics to children is not addressed by anyone. It is not enough to show and tell and then expect the students to make the association; rather, how do we introduce concepts to children and associate their linguistic vocabulary with mathematical symbols? Thus, a common noun would be the superset, a proper noun would be the set, and so on. Is there any verifiable study in this regard?
    This article is eye-opening and I enjoyed it. Thanks

  • Very fascinating. Formal proof checkers, and formal mathematics in general, are often dismissed as for “weirdos” even within very abstract math circles. The difficulty of roping in higher algebraic theories is where I’ll bet this program stutters. However, it’s great that homotopy is the vector of attack. Homotopy theory has seen incredible advances (and generated many of them) in the past 15 years. A lot of brave new math is from a homotopy perspective, so wrangling that monster seems powerful.

  • I may be out on a limb, but might putting the whole logic in AI-usable terms not create a measure of employment among future students of mathematics?

  • I believe this will lead us to understand and check rigorous proofs of mathematical statements, and toward a world of complete certainty. Maybe from now on we will get fewer incorrect proofs of unsolved conjectures. In a true sense we should appreciate this project.

  • Instead of starting from the null set, we should be starting from “set” itself, which is to say, the power to create and appreciate the sense of categorical ordering. That is the only fundamental set which can be concretely real. The null set, by contrast, is an abstraction which relies on our sense of “set” as well as our sense of negation. It’s not fundamental but derived from representation.

    There can be no ‘null’ set except as a concept to symbolize a category which has no contents. Such a category begins with consciousness reflecting upon itself, and inverting that reflection to become an absence. It’s a mistake to found any universal system on an empty category since there would be nothing there to do any categorizing. Sets are mental groupings of things, ideas, or experiences…the absence of everything cannot be enveloped by a set, except as an exercise in abstraction (which is perfectly valid, but not real).

  • Very interesting ideas about set theory. Since mathematical set theory is the formal basis for relational database design theory, could this new perspective lead to a definition for an enhanced data structure paradigm to include more diverse data structures and types?

  • Over my head but glad to see methods, or potential methods, to model the growing complexities of science.

  • Peter: Yes! I’m glad I’m not the only one to see this!
    Tien: You are simply re-affirming this.

    Nothing is the absence of all sets. The Null Set is admittedly the ‘first’ _usable_ form of Nothing, but it is certainly _not_ Nothing. The Null Set is not the apeiron. Even George Spencer-Brown’s “Laws of Form” starts with the ‘mark’, and then goes _back_ to explain Nothing…

  • @tien & Craig Weinberg I think we need to be clear what we are talking about. Let’s start in the reverse order. Mathematics is an attempt to model reality but its formalism is a human construct. Where the need comes from is the human desire to understand the universe around us but we need one vital component, classification. We become confused if something defies classification and so Set Theory is born. But this is about reality, not mathematics and in reality Nothing does not exist. Even if our universe is in the expanse of of infinite void Nothing does does not exist because we are in the middle of it. One of my thoughts has always been the absolute Nothing would have to devoid of any property that would describe it. Ancient Sumerians got on quite well without a ‘zero’.
    Having a ‘null’ set is part of a mathematical reference because one has to start from ‘somewhere’ and of course any calculating machine must embody that.

  • Since the conversation seems to have drifted into “null set” territory: it’s been my opinion that the “null set” requires context, and the radical idea that two null sets are not equivalent unless somehow they are identified by correspondences of their properties. A “null set” is certainly not “nothing”.
    In any case: any present, correctly proved theorem would have to remain true under any other system that allowed the basis, say set theory, to be constructed correctly. We have known for a long time that the selection of an axiom set as a basis is just that, a selection. There are (almost) always different selections that can be used, usually based on utility. If we construct set theory from type theory, then we are just running set theory on a different “machine”. One could hope that the new basis would also open up new realms outside of set theory as well. For instance, I have doubts about the status of hyperintegers in terms of set theory; perhaps type theory could clarify the picture.

  • At the turn of the 20th century David Hilbert proclaimed “we imagine points, we imagine lines, we imagine planes”. Prior to this time imagined mental constructs were not considered orthodox mathematics. For example, Descartes held that imaginary numbers were impossible because imaginary numbers had no practical geometric construct.
    Albert Einstein commented that without the acceptance of Hilbert’s proclamation as orthodox mathematics he could not have written his relativity theories, as the “c” in his theory of General Relativity is equivalent to the square root of minus one, or imaginary numbers having no geometric construct supporting the existence of the square root of minus one.
    This is why today Set theory is considered the foundational mathematical concept. However, as Einstein explains in an article in the 1968 edition of the Encyclopedia Britannica, all mathematics can be traced back to one foundational principle, which is the straight line between two points in space. This is to say that the language of science, mathematics, is based on a foundational concept that does not agree with the known scientific concept that the geometry of space is curved. Mathematics, in my opinion, will be unable to generate an equation that describes the invisible dynamic structural order that underlies everything until a geometric construction system is uncovered that structurally models the structural order underlying everything. The numerical properties of such a structural geometric curved-space coordinate system would renew the bond that was broken between mathematics and practical structural geometry at the turn of the 20th century.

  • To the author:

    I don’t intend to sound too pedantic but in major fields of mathematics there is a precise distinction between the semantic/metatheoretic terms “empty set” and “null set” whereby the empty set is nowadays the standard name for the set-theoretic notion of “nothingness” which is given by an axiom within, e.g., Zermelo-Fraenkel or von Neumann-Bernays-Gödel systems (see https://en.wikipedia.org/wiki/Set_theory#Axiomatic_set_theory for an incomplete overview, there’s more in the page’s footer).

    “Null set” is mostly the standard name for negligible sets, most notably for those of zero measure (measure theory, descriptive set theory, cf. https://en.wikipedia.org/wiki/Null_set).

    To a mathematician the two things are different; even, or perhaps especially, to a student.

    As for the construction of the natural numbers you are referring to, it is just one among a host of viable constructions. That one is due to von Neumann. We can perfectly well give a logically equivalent synthetic definition, e.g., via Peano’s axiom system.

    All in all, with respect to the preceding comments, we must start somewhere with a synthetic definition (i.e., axiomatic assumption) before we can construct anything. It may be set theory, first-order logic, category theory, type theory, or the univalent theory, once it’s complete. The rest of mathematics that is built upon one of these foundational theories is, to put it bluntly, a matter of translations between these foundations.

    The article is otherwise great, thanks, please keep it up, especially with regard to Voevodsky’s, Awodey’s and the entire team’s work. They are doing astonishing work which is an inspiration to lots of people.

  • From this article: “For nearly a decade, Voevodsky, a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.”

    “The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said.

    Yes; however, a relation which I recently studied, an analytic relation of the non-linear mathematical pendulum to the harmonic and anharmonic quantum-mechanical oscillator, shows that well-defined solutions of non-linear partial differential equations exist which cannot be computed on any real existing computing system. In the low-viscosity pendulum case, the question of whether a pendulum stops oscillating in a finite time or not cannot be answered by any real computer system, since the number of symbols necessary to compute the solution exceeds finite computational resources. But this has to wait until there is the chance to spend more time on mathematical research and write up the handwritten notes.

    However, this post http://discontinuous-flow.blogspot.com/2012/09/turing-machines-and-mathematical.html contains an argument which shows that physical space is not a Turing machine, which indicates that physical reality requires symbols which cannot be computed. How can we know if these symbols are a collection of other things or not? I think this is not possible from a physical point of view.

    He furthermore wrote “But taking that step required him to rethink the very basics of mathematics. The accepted foundation of mathematics is set theory. Like any foundational system, set theory provides a collection of basic concepts and rules, which can be used to construct the rest of mathematics. Set theory has sufficed as a foundation for more than a century, but it can’t readily be translated into a form that computers can use to check proofs. So with his decision to start formalizing mathematics on the computer, Voevodsky set in motion a process of discovery that ultimately led to something far more ambitious: a recasting of the underpinnings of mathematics.”

    Set theory or univalent foundations is based on the desire to put the mathematical sciences on a formal basis. However, the mathematical sciences grew out of the need to understand and organize the real world we live in. Mathematical theories spread around the world because they were useful for societies: geometry, for example, for the construction of viaducts, and the number system for accounting.
    Albert Einstein, while working at the IAS, wrote: “The elements of physical reality cannot be determined by a priori philosophical considerations, but must be found by an appeal to results of experiments and measurements”.
    Maybe the formalization of the mathematical foundations is just like the formalization of spoken languages through grammar rules, a kind of mathematical grammar, which must be changed if the language develops new words and dialects that are useful for the description of our modern world?

  • The empty set is absolutely something. When it is said that we are starting with literally nothing, it means “starting with the notion of nothingness”, which is something. We don’t start with nothing, we start with “nothing”: note the quotes. This is an axiom, we must admit it if we want to go on.

  • Nice piece, although the distinction between “proof checking” and “proof construction” deserves to be treated in somewhat more detail.

  • The Null Set confused me in high school but I guess one has to have “nothing” in order to “start”

  • VV is a powerful mathematician. He is devoting himself to a particular problem. It may not work out, but what is wonderful is that he can focus his energies for many years on this problem. You have a diverse community of mathematicians and computer scientists, with different goals and methods. The most powerful can pursue an idiosyncratic project without being considered “crackpots.” This is terrific! They may well recruit other scientists to their agenda. That’s fine too. But no one has to follow, and if the work proves successful the profession and the field will benefit. The discussions about whether it is worth doing or will succeed miss the point. VV is putting his lifetime on the line. That’s what matters, while the critics are not putting up anything comparable.

  • I’m confused about the Type Theory way out of the paradox mentioned. Couldn’t one just refer to the MEGASET of MEGASETs that don’t contain themselves and have the paradox again?

  • Non-mathematician here, so forgive me if this is dumb, but it sounds like this might be useful for Feynman diagrams and sum-over-all-paths calculations.

  • Tien, et al.

    A null set does not contain “nothing”. A null set contains nothing. A null set does not contain anything. There is nothing that is nothing. There is no thing that you can point to that is nothing. That’s a non-affirming negative. A null set positively exists, conceptually, although similarly there is nothing you can point to that is the null set.

  • @Joel Stave

    "I’m confused about the Type Theory way out of the paradox mentioned. Couldn’t one just refer to the MEGASET of MEGASETs that don’t contain themselves and have the paradox again?"

    In type theory, a SET can't contain itself, since only a MEGASET can contain a SET. A MEGASET can't contain itself either, since only a SUPERMEGASET can contain a MEGASET – so we avoid bumping into the paradox at that level too. And a SUPERMEGASET can't contain itself, since only a… You get the idea: it works the same way all the way up, with potentially infinite levels (or "universes" as they say in type theory).
