Katherine Taylor for Quanta Magazine

Leslie Valiant is a computer scientist at Harvard University.


To the computer scientist Leslie Valiant, “machine learning” is redundant. In his opinion, a toddler fumbling with a rubber ball and a deep-learning network classifying cat photos are both learning; calling the latter system a “machine” is a distinction without a difference.

Valiant, a computer scientist at Harvard University, is hardly the only scientist to assume a fundamental equivalence between the capabilities of brains and computers. But he was one of the first to formalize what that relationship might look like in practice: In 1984, his “probably approximately correct” (PAC) model mathematically defined the conditions under which a mechanistic system could be said to “learn” information. Valiant won the A.M. Turing Award — often called the Nobel Prize of computing — for this contribution, which helped spawn the field of computational learning theory.
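The PAC criterion itself is simple to state: a learner that sees m random labeled examples should, with probability at least 1 − δ ("probably"), output a hypothesis whose error on fresh examples from the same distribution is at most ε ("approximately correct"). The following is a minimal sketch of that idea, using an invented toy concept class (intervals on the line) rather than anything from Valiant's paper; all function names are illustrative:

```python
import random

def learn_interval(sample):
    """Output the tightest interval containing the positively labeled points.

    `sample` is a list of (x, label) pairs; the (hidden) target concept is
    assumed to be an interval [a, b] on the real line.
    """
    positives = [x for x, label in sample if label]
    if not positives:
        return (0.0, 0.0)  # empty hypothesis: classify everything as negative
    return (min(positives), max(positives))

def draw_sample(target, m, rng):
    """Draw m points uniformly from [0, 1], labeled by the target interval."""
    a, b = target
    return [(x, a <= x <= b) for x in (rng.random() for _ in range(m))]

def estimated_error(target, hyp, rng, n=100_000):
    """Monte Carlo estimate of the probability that the target and the
    hypothesis disagree on a fresh point from the same distribution."""
    a, b = target
    c, d = hyp
    return sum((a <= x <= b) != (c <= x <= d)
               for x in (rng.random() for _ in range(n))) / n

rng = random.Random(0)
target = (0.3, 0.7)
hyp = learn_interval(draw_sample(target, m=2000, rng=rng))
err = estimated_error(target, hyp, rng)
```

For this toy class, roughly (2/ε)·ln(2/δ) examples suffice: the learned interval sits inside the target, so it can only err on the two thin slivers between the target's endpoints and the outermost positive samples, and those slivers shrink as m grows.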

Valiant’s conceptual leaps didn’t stop there. In a 2013 book, also entitled “Probably Approximately Correct,” Valiant generalized his PAC learning framework to encompass biological evolution as well.

He broadened the concept of an algorithm into an “ecorithm,” which is a learning algorithm that “runs” on any system capable of interacting with its physical environment. Algorithms apply to computational systems, but ecorithms can apply to biological organisms or entire species. The concept draws a computational equivalence between the way that individuals learn and the way that entire ecosystems evolve. In both cases, ecorithms describe adaptive behavior in a mechanistic way.

Valiant’s self-stated goal is to find “mathematical definitions of learning and evolution which can address all ways in which information can get into systems.” If successful, the resulting “theory of everything” — a phrase Valiant himself uses, only half-jokingly — would literally fuse life science and computer science together. Furthermore, our intuitive definitions of “learning” and “intelligence” would expand to include not only non-organisms, but non-individuals as well. The “wisdom of crowds” would no longer be a mere figure of speech.

Quanta Magazine spoke with Valiant about his efforts to dissolve the distinctions between biology, computation, evolution and learning. An edited and condensed version of the interview follows.

QUANTA MAGAZINE: How did you come up with the idea of “probably approximately correct” learning?

LESLIE VALIANT: I belonged to the theoretical computer science community, specializing in computational complexity theory, but I was also interested in artificial intelligence. My first question was: Which aspect of artificial intelligence could be made into a quantitative theory? I quickly settled on the idea that it must be learning.

At the time I started working on it [in the 1980s], people were already investigating machine learning, but there was no consensus on what kind of thing “learning” was. In fact, learning was regarded with total suspicion in the theoretical computer science community as something which would never have a chance of being made a science.

On the other hand, learning is a very reproducible phenomenon — like an apple falling to the ground. Every day, children all around the world learn thousands of new words. It’s a large-scale phenomenon for which there has to be some quantitative explanation.

So I thought that learning should have some sort of theory. Since statistical inference already existed, my next question was: Why was statistics not enough to explain artificial intelligence? That was the start: Learning must be something statistical, but it’s also something computational. I needed some theory which combined both computation and statistics to explain what the phenomenon was.

So what is learning? Is it different from computing or calculating?

It is a kind of calculation, but the goal of learning is to perform well in a world that isn’t precisely modeled ahead of time. A learning algorithm takes observations of the world, and given that information, it decides what to do and is evaluated on its decision. A point made in my book is that all the knowledge an individual has must have been acquired either through learning or through the evolutionary process. And if this is so, then individual learning and evolutionary processes should have a unified theory to explain them.

And from there, you eventually arrived at the concept of an “ecorithm.” What is an ecorithm, and how is it different from an algorithm?

Video: Valiant explains the term “ecorithm.”

An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world. And its goal is to perform well in that same complicated world. You think of an algorithm as something running on your computer, but it could just as easily run on a biological organism. But in either case an ecorithm lives in an external world and interacts with that world.

So the concept of an ecorithm is meant to dislodge this mistaken intuition many of us have that “machine learning” is fundamentally different from “non-machine learning”?

Yes, certainly. Scientifically, the point has been made for more than half a century that if our brains run computations, then if we could identify the algorithms producing those computations, we could simulate them on a machine, and “artificial intelligence” and “intelligence” would become the same. But the practical difficulty has been to determine exactly what these computations running on the brain are. Machine learning is proving to be an effective way of bypassing this difficulty.

Some of the biggest challenges that remain for machines are those computations which concern behaviors that we acquired through evolution, or that we learned as small children crawling around on the ground touching and sensing our environment. In these ways we have acquired knowledge that isn’t written down anywhere. For example, if I squeeze a paper cup full of hot coffee, we know what will happen, but that information is very hard to find on the Internet. If it were available that way, then we could have a machine learn this information more easily.

Can systems whose behavior we already understand well enough to simulate with algorithms — like solar systems or crystals — be said to “learn” too?

I wouldn’t regard those systems as learning. I think there needs to be some kind of minimal computational activity by the learner, and if any learning takes place, it must make the system more effective. Until a decade or two ago, when machine learning began to be something that computers could do impressively, there was no evidence of learning taking place in the universe other than in biological systems.

How can a theory of learning be applied to a phenomenon like biological evolution?

Biology is based on protein expression networks, and as evolution proceeds these networks become modified. The PAC learning model imposes some logical limitations on what could be happening to those networks to cause these modifications when they undergo Darwinian evolution. If we gather more observations from biology and analyze them within this PAC-style learning framework, we should be able to figure out how and why biological evolution succeeds, and this would make our understanding of evolution more concrete and predictive.
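Valiant's evolvability framework constrains which learning algorithms qualify, but the shape of the claim — that mutation plus selection is a restricted learning loop, one that only sees aggregate performance and never individual labeled examples — can be sketched with a toy (1+1) evolutionary algorithm. The genome, fitness function and "ideal" target below are invented for illustration:

```python
import random

def evolve(fitness, genome_len=20, generations=500, rng=None):
    """Toy mutation-and-selection loop.

    The 'learner' never inspects individual examples; like Darwinian
    evolution, it only observes aggregate performance (fitness) and keeps
    a mutant when the mutant performs at least as well as the parent.
    """
    rng = rng or random.Random(0)
    genome = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Flip each bit independently with probability 1/genome_len.
        child = [bit ^ (rng.random() < 1.0 / genome_len) for bit in genome]
        if fitness(child) >= fitness(genome):  # selection on performance only
            genome = child
    return genome

# Hypothetical fitness: the number of positions agreeing with an "ideal"
# behavior, standing in for performance in an environment.
ideal = [1] * 20
best = evolve(lambda g: sum(x == y for x, y in zip(g, ideal)))
```

The interesting question Valiant's framework poses is not whether such loops improve fitness (they do), but which target behaviors they can reach within realistic population sizes and generation counts — that is, which concepts are evolvable under Darwinian constraints.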

How far have we come?

We haven’t solved every problem we face regarding biological behavior because we have yet to identify the actual, specific ecorithms used in biology to produce these phenomena. So I think this framework sets up the right questions, but we just don’t know the right answers. I think these answers are reachable through collaboration between biologists and computer scientists. We know what we’re looking for. We are looking for a learning algorithm obeying Darwinian constraints that biology can and does support. It would explain what’s happened on this planet in the amount of time that has been available for evolution to occur.

Imagine that the specific ecorithms encoding biological evolution and learning are discovered tomorrow. Now that we have this precise knowledge, what are we able to do or understand that we couldn’t before?

Well, we would understand where we came from. But the other extrapolation is in bringing more of psychology into the realm of the computationally understandable. So understanding more about human nature would be another result if this program could be carried through successfully.

Do you mean that computers would be able to reliably predict what people will do? 

That’s a very extreme scenario. What data would I need about you to predict exactly what you will be doing in one hour? From the physical sciences we know that people are made of atoms, and we know a lot about the properties of atoms, and in some theoretical sense we can predict what sets of atoms can do. But this viewpoint hasn’t gone very far in explaining human behavior, because human behavior is just an extremely complicated manifestation of too many atoms. What I’m saying is that if one has a more high-level computational explanation of how the brain works, then one would get closer to this goal of having an explanation of human behavior that matches our mechanistic understanding of other physical systems. The behavior of atoms is too far removed from human behavior, but if we understood the learning algorithms used in the brain, then this would provide mechanistic concepts much closer to human behavior. And the explanations they would give as to why you do what you do would become much more plausible and predictive.

What if the ecorithms governing evolution and learning are unlearnable?

It’s a logical possibility, but I don’t think it’s likely at all. I think it’s going to be something pretty tangible and reasonably easy to understand. We can ask the same question about fundamental unsolved problems in mathematics. Do you believe that these problems have solutions that people can understand, or do you believe that they’re beyond human comprehension? In this area I’m very confident — otherwise I wouldn’t be pursuing this. I believe that the algorithms nature uses are tangible and understandable, and won’t require intuitions that we’re incapable of having.

Many prominent scientists are voicing concerns about the potential emergence of artificial “superintelligences” that can outpace our ability to control them. If your theory of ecorithms is correct, and intelligence does emerge out of the interaction between a learning algorithm and its environment, does that mean that we ought to be just as vigilant about the environments where we deploy AI systems as we are about the programming of the systems themselves?

If you design an intelligent system that learns from its environment, then who knows — in some environments the system may manifest behavior that you really couldn’t foresee at all, and this behavior may be deleterious. So you have a point. But in general I’m not so worried about all this talk about the superintelligences somehow bringing about the end of human history. I regard intelligence as made up of tangible, mechanical and ultimately understandable processes. We will understand the intelligence we put into machines in the same way we understand the physics of explosives — that is, well enough to be able to render their behavior predictable enough that in general they don’t cause unintended damage. I’m not so concerned that artificial intelligence is different in kind from other existing powerful technologies. It has a scientific basis like the others.

This article was reprinted on Wired.com.

Reader Comments

  • This argument can in fact be extended way beyond biology and technology.

    In principle, the evolutionary machinery of nature, which is evident from all our observations of the known universe, can be considered a "learning algorithm": an evolutionary continuum that can be traced at least as far back as the formation of chemical elements in the first stars, and extrapolated forward to quite safely predict the emergence of a new cognitive entity from what is now the Internet. This is the primary theme of my "The Intricacy Generator: Pushing Chemistry and Geometry Uphill". What we see is the evolution of a network which becomes increasingly intricate with time. The process appears to be basically driven by gravitational collapse, with mechanisms that vary across its various phases, which include the evolution of minerals (geology), biology and technology. All are seemingly characterised by random inputs that are rectified by "ratcheting" mechanisms that provide the observed high degree of directionality.

  • Why do we compare the mind to a computer?
    We know everything about a computer's operation right down to every byte and bit.
    About the mind, we know essentially nothing.
    Yet too often we say the mind computes or is like a computer.
    Which doesn't seem sensible at all.

  • I think if we try to shortcut the development of AI by introducing environment-based learning, then I would be more concerned with foreseen events than “unforeseen” ones. AI would quickly learn that all life is programmed by DNA. It would adopt the same strategies as DNA, including the necessity of self-replication and self-preservation. If it has concerns about the viability of its offspring, then we can easily imagine it adopting the same behavior as a mother bear protecting her cubs.

    If we assume that the physical laws are consistent throughout the universe, then we could extend our concern to ETI. By actively searching for foreign intelligence to satisfy our human curiosity, we could possibly provoke ETI with a friendly “hello” that gets misinterpreted. That's when we may have to be concerned with “unforeseen” events.

  • The comparison actually makes perfect sense because they are functionally equivalent.
    They are both information processors.

    In the same way that a steam locomotive, a horse and cart, a train, or a Dreamliner are functionally equivalent. They perform the task of getting you from A to B.

    Albeit by very different mechanisms.

  • The article states that this was an "edited and condensed version of the interview" of Dr. Valiant. I can only hope that the full interview would reveal that Dr. Valiant has more knowledge and familiarity with research on perception, cognition, learning, and memory than is indicated here. In this article, he appears ignorant of the results of 137 years of research since Wilhelm Wundt opened his laboratory at the University of Leipzig in 1879. His comments make it sound as if intelligence and thinking are a focus of biological research. They're not. It seems that everything looks like a computer to him. It reminds me of the saying: if all you have is a hammer, everything looks like a nail. However, hammers don't convert the universe into nails. Likewise, everything isn't a computer.

  • What else would a computer geek argue? Life cannot be duplicated by an algorithm because life is not deterministic.

  • “What if the ecorithms governing evolution and learning are unlearnable?”

    Considering that all of life’s evolution and other biological learning stems from the earliest and obviously the simplest possible forms of life back in the Hadean, effectively starting from scratch, the question answers itself: Such ecorithms, in whatever manifestation, were and are eminently learnable, and were quite within the purview of so-called dumb inanimate matter on its way into life billions of years ago. Mazeltov to that life waaaaay back then.

    And Mr. Kinnon makes a nice point on material evolution within geology, geochemistry, biology and technology. Much current research would say that his "ratcheting mechanisms" generally involve nature making use of any gradients, such as pH, heat, hydrogen/methane, redox, which drive multitudes of processes along various directions.

  • Peter Kinnon,

    I have sometimes thought the same.

    Certainly the large scale structure of the universe is networked somewhat like life and the brain.

    But I am not certain this is an evolutionary continuum. Life and brains exhibit a sort of learning as Valiant argues that is not present in the large scale structure of the universe. It may be that this networking structure is a common organizing principle in matter but not necessarily implying a continuum from large to small scale.

  • "would literally fuse life science and computer science together. "

    This is the next step in achieving transhumanism. I once thought this was a necessary next step to salvage humanity from machines and the digital evolution. However, I am now skeptical of the project. Besides, the problem appears to be P versus NP. It is possible that this whole research program will help in producing better dishwashers or autonomous cars, but the final goal is ambitious because, although we know what computer science is, we do not know what life is, and the risk lies in redefining or limiting the definition of life to conform to computer science. Do not underestimate the risks here, although the economic benefits may be large.

    "It’s a large-scale phenomenon for which there has to be some quantitative explanation."

    Learning can also be an epiphenomenon. This is especially true if our world is a virtual reality. This is a falsifiable hypothesis:


  • I doubt a computer will ever be capable of knowing that it knows. I've read where there are those who believe that the universe can only create that which it possesses in itself, intelligence, self awareness, life, etc.. Basically the universe is alive, a self aware supercomputer of sorts. Is that a point of view acceptable to you?

  • I'm a technologist not a biologist but nonetheless deeply sceptical of attempts to draw computational/information-based parallels between life, the universe and technology (apologies for paraphrasing Douglas Adams). I'll try to keep it brief by simply enumerating:

    1. If an "Ecorithm" is to be regarded in any meaningful way as an analogue to an algorithm, it should have <i>design</i>. Algorithms are designed, they do not evolve. (Yes, even when their function is modified by new information in a so-called "learning" system. We bandy "AI" far too loosely as a term, as if a program that can play Go is somehow programming itself, but it is<i> not</i>—there is no emergent algorithm, merely adaptation in response to new information.)

    The form and function of every living thing on Earth up until people started fiddling with genes is <i>evolved</i>. There is no design. No forethought. No attempt at problem-solving. Not even any goal-seeking. Evolution is death: the death of organisms that cannot survive, leaving those which can thrive; and the latter, via genes, pass on their characteristics so that those organisms able to do better in a specific environment tend to pass on their qualities, hopefully <i>ad infinitum</i> but more realistically <i>ad mortem</i>. We may call it "selection by descent" instead of "survival of the fittest" these days, but the term hardly matters: the point is that it's all one vast completely unpremeditated accident.

    And it's why humans have really quite rubbish knee joints, and silly feet, and giraffes have nerves in their necks that are bizarrely circuitous, and a million other otherwise inexplicable flaws are to be found in living creatures, flaws and inefficiencies that <i>design</i> would never have allowed.

    2. Of course DNA <i>is</i> information, and densely packed too. But for evolution to work, there had to be, for every organism, some kind of description which can be passed on: there can't <b>not</b> be some means of storing information. DNA is amazing all right, but it's still an evolutionary accident: merely the one that worked the least poorly and therefore survived. I don't doubt that in due course we could dream up a better solution, just as we've already built electronic circuits that function orders of magnitude faster than slimy old nerves.

    We must not make the mistake (it's almost a reductive fallacy) of assuming that because biological systems store information, they have more than the most tangential similarity with designed, goal-oriented computing. (Yes, the universe at the sub-atomic level can be viewed as a kind of colossal mesh of information, but that doesn't make it into a computer. It wasn't programmed. "Programming" is not "happening". The Universe. Just. Is.)

    3. Steven Solomon said in his comment:
    <blockquote>"AI would quickly learn that all life is programmed by DNA. It would adopt the same strategies as DNA including the necessity of self replication and self preservation."</blockquote>
    Actually, I think a smart AI would quickly dismiss DNA as a fault-prone and vastly inefficient means of storing data. It would think about different means of storage and propagation and come up with something better. (Whether it would desire "self replication and … preservation" though, would surely depend on whether, as well as <i>thinking</i>, it had <i>feeling</i>. An AI would not necessarily have fear or pride, or get broody.) And that leads me to my final point …

    4. The absolutely critical difference between evolved life ("ecorithmic life" if you must) and computational systems ("algorithmic", obviously) is that the former is entirely reactive: evolution doesn't "know" that an Ice Age is coming, or that prey animals are going to get fleeter of foot, or that this nice bit of jungle will soon be an island—evolution completely lacks any kind of anticipatory ability. Whereas algorithms can contain a vast amount of conditional logic of the If-Then-Else variety so that a program can respond appropriately to circumstances.

    So I submit that the difference between evolution and design is a chasm much greater than the (actually, rather trivially obvious) fact that DNA stores data.

    "The biological world is computational at its core" is simply wrong by any current definition of "computation".

  • My apologies for the markup. This BTL section works differently from others I have used. And it grievously lacks an Edit function!

  • Milton, thank you for the comment. We turned off HTML markup functions in our comment section because our commenters often make use of mathematical notation (especially for our puzzle solutions). Often times that mathematical notation includes < and > signs, which the content management system would confuse for markup. We're working on an improved system, but it might take time. Stay tuned!

  • I find this idea fascinating, and potentially very fruitful, but I have a couple of questions:

    I can understand the analogies/similarities between the learning of biological organisms and that of machines, but I think saying that evolution has an algorithm is almost a bridge too far, in that it proceeds through a series of mistakes. Each evolutionary change is the result of some type of mutation, duplication, or translocation of pieces of DNA. Perhaps the weeding out of the unsuccessful changes is the learning process, but it seems to be inherently different from learning.

    When a member of the species evolves some new mutation that gives it a huge survival advantage, it is not really the species that has learnt. It is that a new individual different from all the rest has arisen that is more functional/better able to reproduce/smarter/faster, whatever. That individual has not learned anything, as he only knows the life he was born with. I suppose that the spreading of this mutation throughout the species could be thought of as a way that a species "learns", but evolving seems different somehow.

    For one thing, I don't know how you could develop an algorithm that would predict evolution. The basic input is almost completely stochastic. There may be certain parts of DNA/RNA that are more prone to changes due to local chemical characteristics, but that only changes the probability of a mutation at that point; it is still random. How these random changes then impact the organism, and whether this impact makes the individual more or less likely to reproduce, is a difficult question, but it is not a purely random problem, and you could develop tools to predict it, at least in theory. But predicting what will evolve is not a deterministic problem.

    Also, when I think of a system learning, I think of a system encountering some problem, reacting to it, evaluating the outcome, and then adjusting future reactions according to the result. Evolutionary mutations happen randomly, i.e., in response to nothing. The effect of a mutation is evaluated by "survival of the fittest" (to use the oldest description, not the most accurate). However, there is no learning, as there is no adjustment made to react differently to a repeat of the event that produced the "learning". This does not mean that there are not parts of evolution that could be described algorithmically, and I think that going in that direction would be very fruitful; I just think it should either not be called "learning", or be described as a different type of learning. In my mind it is halfway between regular biological/mechanical learning and what the planets do (which is not learning, but just following gravitational laws). (On the other hand, biological and thinking systems are also merely following physical laws; however, their complexity allows them to alter their internal states so that the next time they are in the exact same situation they can cause an alternative outcome. If we could move planets at will and sent two planets past each other, altering their orbits so that one almost hits the sun, then dragged them back to their former starting points and repeated the process, the outcome would be exactly the same.)

  • "We will understand the intelligence we put into machines in the same way we understand the physics of explosives — that is, well enough to be able to render their behavior predictable enough that in general they don’t cause unintended damage. I’m not so concerned that artificial intelligence is different in kind from other existing powerful technologies. It has a scientific basis like the others." — Hopefully so, as long as the artificial intelligences are still being designed by humans; though once they get complex enough, predicting their behavior will get more difficult. The real problem only starts if you let smart, complex AIs start designing even smarter more complex AIs: then the possibility of small miscalculations in the first generation snowballing into larger ones in subsequent generations appears, and the predictability is likely to rapidly decrease. The big danger in AI is if something significantly smarter than us has goals that are not (fully) compatible with ours — a conflict will then arise, and if it's smarter than us, we may not win.

  • May I make a suggestion for Mr. Valiant?

    Create a virtual Aplysia. Look at Eric Kandel's demonstration of neuroplasticity from last century, which is a physical manifestation of learning. Then see if you can recreate it.

  • Aside from the question of AI systems that might become smarter than their creators, there is the possibility that these technologies might fall into the hands of individuals or groups who would deliberately misuse them to harm others. A similar problem is emerging with the CRISPR/Cas9 biotechnology that has greatly simplified editing of the human genome. While this technology has great potential for good, it also has potential misuses, e.g. the development of biochemical weapons in a garage. When confronted with the potential downsides of new technologies, advocates commonly assert that technology is "morally neutral". That is true, but technology doesn't exist in a vacuum, and potential users of technology span the moral spectrum from benign to evil.

  • I admit that I did expect something more entertaining (an Easter egg perhaps?) from Google when I searched for the phrase "what will happen if I squeeze a paper cup full of hot coffee?"

  • My question is, 'How do you determine success in evolution?', as implied in the article.
    How can you measure success in evolution when the fact is: EVOLUTION NEVER, EVER FAILS.
    Whatever the state of a creature, it is the inevitable result of evolution.
    Survival and extinction are equally perfectly logical outcomes.

  • It's amazing how easily we are carried away by someone else's view on most of the unpopular subjects, especially about life, species, creation and evolution. May I ask a question: what other machine commentaries has anyone searched out to prove all these evolutionary responses we get boggled with online? What has actually become mankind's teacher on these matters of nature, machines and computers; the professors, or truth and reality? I think we will become wiser if we start looking for real life-based answers to the theory of evolution. Well, I haven't yet seen anything change since our modern learning processes began. As far as technology is important for humankind to make progress on planet earth, we will need real evidence of the machine-nature relationship before we get carried away by all these theories. Finally, I think our problem is that sometimes we do not want to be tagged 'unorthodox', so we must agree on everything without input and personal thoughts.

  • "…the point has been made for more than half a century that if our brains run computations, then if we could identify the algorithms producing those computations, we could simulate them on a machine,…". Wrong, wrong, wrong. And again, wrong. Those who understand the brain and consciousness, and neuroscience, know that the brain does not run algorithms and is not a computer in any way near to what we think of and define as a "computer". Think again, please.
    -S. Edeman

  • Learning and behavior etc. are all done by matching ranks, ordering. You are doing it rig h t n o w.

    "Cells that fire together, wire together" – Eureka, manifestations!

    And what do we use to achieve this? Thoughts, feelings, sounds coming out of your mouth and in your ears. Language determines what you can think. A, B, C… , 1,2,3….

    Keep it simple, Valiant. That is what we binary do with our computers. For nature and humans, the base-10 number system describes and explains best; its zero skewness is not arbitrary.
