Self-Assembly Gets Automated in Reverse of ‘Game of Life’

In cellular automata, simple rules create elaborate structures. Now researchers can start with the structures and reverse-engineer the rules.

Neural cellular automata self-assemble into whatever shape you desire.

Michael Waraksa for Quanta Magazine

Introduction

Alexander Mordvintsev showed me two clumps of pixels on his screen. They pulsed, grew and blossomed into monarch butterflies. As the two butterflies grew, they smashed into each other, and one got the worst of it; its wing withered away. But just as it seemed like a goner, the mutilated butterfly did a kind of backflip and grew a new wing like a salamander regrowing a lost leg.

Mordvintsev, a research scientist at Google Research in Zurich, had not deliberately bred his virtual butterflies to regenerate lost body parts; it happened spontaneously. That was his first inkling, he said, that he was onto something. His project built on a decades-old tradition of creating cellular automata: miniature, chessboard-like computational worlds governed by bare-bones rules. The most famous, the Game of Life, first popularized in 1970, has captivated generations of computer scientists, biologists and physicists, who see it as a metaphor for how a few basic laws of physics can give rise to the vast diversity of the natural world.

In 2020, Mordvintsev brought this into the era of deep learning by creating neural cellular automata, or NCAs. Instead of starting with rules and applying them to see what happened, his approach started with a desired pattern and figured out what simple rules would produce it. “I wanted to reverse this process: to say that here is my objective,” he said. With this inversion, he has made it possible to do “complexity engineering,” as the physicist and cellular-automata researcher Stephen Wolfram proposed in 1986 — namely, to program the building blocks of a system so that they will self-assemble into whatever form you want. “Imagine you want to build a cathedral, but you don’t design a cathedral,” Mordvintsev said. “You design a brick. What shape should your brick be that, if you take a lot of them and shake them long enough, they build a cathedral for you?”

Such a brick sounds almost magical, but biology is replete with examples of basically that. A starling murmuration or ant colony acts as a coherent whole, and scientists have postulated simple rules that, if each bird or ant follows them, explain the collective behavior. Similarly, the cells of your body play off one another to shape themselves into a single organism. NCAs are a model for that process, except that they start with the collective behavior and automatically arrive at the rules.

Alexander Mordvintsev created complex cell-based digital systems that use only neighbor-to-neighbor communication.

Courtesy of Alexander Mordvintsev

The possibilities this presents are potentially boundless. If biologists can figure out how Mordvintsev’s butterfly can so ingeniously regenerate a wing, maybe doctors can coax our bodies to regrow a lost limb. For engineers, who often find inspiration in biology, these NCAs are a potential new model for creating fully distributed computers that perform a task without central coordination. In some ways, NCAs may be innately better at problem-solving than neural networks.

Life’s Dreams

Mordvintsev was born in 1985 and grew up in the Russian city of Miass, on the eastern flanks of the Ural Mountains. He taught himself to code on a Soviet-era IBM PC clone by writing simulations of planetary dynamics, gas diffusion and ant colonies. “The idea that you can create a tiny universe inside your computer and then let it run, and have this simulated reality where you have full control, always fascinated me,” he said.

He landed a job at Google’s lab in Zurich in 2014, just as a new image-recognition technology based on multilayer, or “deep,” neural networks was sweeping the tech industry. For all their power, these systems were (and arguably still are) troublingly inscrutable. “I realized that, OK, I need to figure out how it works,” he said.

He came up with “deep dreaming,” a process that takes whatever patterns a neural network discerns in an image, then exaggerates them for effect. For a while, the phantasmagoria that resulted — ordinary photos turned into a psychedelic trip of dog snouts, fish scales and parrot feathers — filled the internet. Mordvintsev became an instant software celebrity.

Among the many scientists who reached out to him was Michael Levin of Tufts University, a leading developmental biologist. If neural networks are inscrutable, so are biological organisms, and Levin was curious whether something like deep dreaming might help to make sense of them, too. Levin’s email reawakened Mordvintsev’s fascination with simulating nature, especially with cellular automata.

From a single cell, this neural cellular automaton transforms into the shape of a lizard.

The core innovation made by Mordvintsev, Levin and two other Google researchers, Ettore Randazzo and Eyvind Niklasson, was to use a neural network to define the physics of the cellular automaton. In the Game of Life (or just “Life” as it’s commonly called), each cell in the grid is either alive or dead and, at each tick of the simulation clock, either spawns, dies or stays as is. The rules for how each cell behaves appear as a list of conditions: “If a cell has more than three neighbors, it dies,” for example. In Mordvintsev’s system, the neural network takes over that function. Based on the current condition of a cell and its neighbors, the network tells you what will happen to that cell. The same type of network is used to classify an image as, say, a dog or cat, but here it classifies the state of cells. Moreover, you don’t need to specify the rules yourself; the neural network can learn them during the training process.
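To make the contrast concrete, here is Life's hand-written rule as a short function, with the conditions spelled out explicitly. In an NCA, a trained neural network stands in for this function. (This sketch is illustrative; the names are invented.)

```python
import numpy as np

def life_step(grid):
    """One tick of Conway's Game of Life, with the rule written out
    as explicit conditions on each cell's eight neighbors."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbors; a dead cell spawns with 3.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    spawn = (grid == 0) & (neighbors == 3)
    return (survive | spawn).astype(np.uint8)

# A "blinker" oscillates between a horizontal and a vertical bar.
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1          # horizontal bar
grid = life_step(grid)    # one tick: now a vertical bar
```

An NCA keeps this local, neighbor-only structure but replaces the `survive`/`spawn` conditions with the output of a small neural network whose parameters are learned rather than written by hand.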

To start training, you seed the automaton with a single “live” cell. Then you use the network to update the cells over and over, anywhere from dozens to thousands of times. You compare the resulting pattern to the desired one. The first time you do this, the result will look nothing like what you intended. So you adjust the neural network’s parameters, rerun the network to see whether it does any better now, make further adjustments, and repeat. If rules exist that can generate the pattern, this procedure should eventually find them.

The adjustments can be made using either backpropagation, the technique that powers most modern deep learning, or a genetic algorithm, an older technique that mimics Darwinian evolution. Backpropagation is much faster, but it doesn’t work in every situation, and it required Mordvintsev to adapt the traditional design of cellular automata. Cell states in Life are binary — dead or alive — and transitions from one state to the other are abrupt jumps, whereas backpropagation demands that all transitions be smooth. So he adopted an approach developed by, among others, Bert Chan at Google’s Tokyo lab in the mid-2010s. Mordvintsev made the cell states continuous values, anything from 0 to 1, so they are never strictly dead or alive, but always somewhere in between.
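The search can be sketched with a toy one-dimensional automaton whose cell states are continuous values between 0 and 1, trained by random hill climbing, a bare-bones stand-in for the genetic algorithm (or for backpropagation). Everything here, from the three-weight rule to the bump-shaped target, is invented for illustration; the real NCAs use a full neural network per cell.

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 16, 10
# Hypothetical target pattern: a soft bump centered on the grid.
target = np.exp(-0.5 * ((np.arange(N) - N // 2) / 2.0) ** 2)

def run(params):
    """Grow a pattern from a single seed under a 3-neighbor linear rule.
    States are continuous in [0, 1], so small parameter changes yield
    small output changes -- the smoothness that gradient methods need."""
    w_left, w_self, w_right, bias = params
    state = np.zeros(N)
    state[N // 2] = 1.0                       # the single "live" seed
    for _ in range(STEPS):
        state = np.clip(w_left * np.roll(state, 1) + w_self * state
                        + w_right * np.roll(state, -1) + bias, 0.0, 1.0)
    return state

def loss(params):
    """How far the grown pattern is from the desired one."""
    return float(np.mean((run(params) - target) ** 2))

# Random hill climbing: perturb the rule, keep the change if the grown
# pattern matches the target better, and repeat.
params = np.zeros(4)
best = loss(params)
for _ in range(500):
    trial = params + 0.1 * rng.normal(size=4)
    trial_loss = loss(trial)
    if trial_loss < best:
        params, best = trial, trial_loss
```

The loop only ever tweaks the local rule, never the pattern directly; the pattern is whatever the tweaked rule grows.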

Mordvintsev also found that he had to endow each cell with “hidden” variables, which do not indicate whether that cell is alive or dead, or what type of cell it is, but nonetheless guide its development. “If you don’t do that, it just doesn’t work,” he said. In addition, he noted that if all the cells updated at the same time, as in Life, the resulting patterns lacked the organic quality he was seeking. “It looked very unnatural,” he said. So he began to update at random intervals.
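One way to implement random update intervals is a per-cell coin flip: compute the update everywhere, but let only a random subset of cells actually change each tick. The sketch below uses a scalar state per cell for simplicity; in the real system each cell's state is a vector that includes the hidden channels, and the helper name and probability here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def async_update(state, update_fn, p=0.5):
    """Compute the update for every cell, but let each cell actually
    adopt it only with probability p, so cells fall out of lockstep."""
    new = update_fn(state)
    mask = rng.random(state.shape) < p   # per-cell coin flip
    return np.where(mask, new, state)    # updated where True, unchanged elsewhere

# Roughly half the cells advance each tick; the rest wait.
state = np.zeros((8, 8))
state = async_update(state, lambda s: s + 1.0)
```

Because no two runs update cells in exactly the same order, the learned rules cannot rely on precise timing, which pushes them toward the robust, organic-looking growth described above.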

Finally, he made his neural network fairly beefy — 8,000 parameters. On the face of it, that seems perplexing. A direct translation of Life into a neural network would require just 25 parameters, according to simulations done in 2020 by Jacob Springer, who is now a doctoral student at Carnegie Mellon University, and Garrett Kenyon of Los Alamos National Laboratory. But deep learning practitioners often have to supersize their networks, because learning to perform a task is harder than actually performing it.

Moreover, extra parameters mean extra capability. Although Life can generate immensely rich behaviors, Mordvintsev’s monsters reached another level entirely.

Fixer Upper

The paper that introduced NCAs to the world in 2020 included an applet that generated the image of a green lizard. If you swept your mouse through the lizard’s body, you left a trail of erased pixels, but the animal pattern soon rebuilt itself. The power of NCAs not just to create patterns, but to re-create them if they got damaged, entranced biologists. “NCAs have an amazing potential for regeneration,” said Ricard Solé of the Institute of Evolutionary Biology in Barcelona, who was not directly involved in the work.

The butterfly and lizard images are not realistic animal simulations; they do not have hearts, nerves or muscles. They are simply colorful patterns of cells in the shape of an animal. But Levin and others said they do capture key aspects of morphogenesis, the process whereby biological cells form themselves into tissues and bodies. Each cell in a cellular automaton responds only to its neighbors; it does not fall into place under the direction of a master blueprint. Broadly, the same is true of living cells. And if cells can self-organize, it stands to reason that they can self-reorganize.

Cut off the tail of an NCA lizard and the form will regenerate itself.

Sometimes, Mordvintsev found, regeneration came for free. If the rules shaped single pixels into a lizard, they also shaped a lizard with a big gash through it into an intact animal again. Other times, he expressly trained his network to regenerate. He deliberately damaged a pattern and tweaked the rules until the system was able to recover. Redundancy was one way to achieve robustness. For example, if trained to guard against damage to the animal’s eyes, a system might grow backup copies. “It couldn’t make eyes stable enough, so they started proliferating — like, you had three eyes,” he said.

Sebastian Risi, a computer scientist at the IT University of Copenhagen, has sought to understand what exactly gives NCAs their regenerative powers. One factor, he said, is the unpredictability that Mordvintsev built into the automaton through features such as random update intervals. This unpredictability forces the system to develop mechanisms to cope with whatever life throws at it, so it will take the loss of a body part in stride. A similar principle holds for natural species. “Biological systems are so robust because the substrate they work on is so noisy,” Risi said.

Last year, Risi, Levin and Ben Hartl, a physicist at Tufts and the Vienna University of Technology, used NCAs to investigate how noise leads to robustness. They added one feature to the usual NCA architecture: a memory. This system could reproduce a desired pattern either by adjusting the network parameters or by storing it pixel-by-pixel in its memory. The researchers trained it under various conditions to see which method it adopted.

If all the system had to do was reproduce a pattern, it opted for memorization; fussing with the neural network would have been overkill. But when the researchers added noise to the training process, the network came into play, since it could develop ways to resist noise. And when the researchers switched the target pattern, the network was able to learn it much more rapidly because it had developed transferable skills such as drawing lines, whereas the memorization approach had to start from scratch. In short, systems that are resilient to noise are more flexible in general.

Even if disturbed, the textures created by NCAs have the ability to heal themselves.

The researchers argued that their setup is a model for natural evolution. The genome does not prescribe the shape of an organism directly; instead, it specifies a mechanism that generates the shape. That enables species to adapt more quickly to new situations, since they can repurpose existing capabilities. “This can tremendously speed up an evolutionary process,” Hartl said.

Ken Stanley, an artificial intelligence researcher at Lila Sciences who has studied computational and natural evolution, cautioned that NCAs, powerful though they are, are still an imperfect model for biology. Unlike machine learning, natural evolution does not work toward a specific goal. “It’s not like there was an ideal form of a fish or something which was somehow shown to evolution, and then it figured out how to encode a fish,” he noted. So the lessons from NCAs may not carry over to nature.

Auto Code

In regenerating lost body parts, NCAs demonstrate a kind of problem-solving capability, and Mordvintsev argues that they could be a new model for computation in general. Automata may form visual patterns, but their cell states are ultimately just numerical values processed according to an algorithm. Under the right conditions, a cellular automaton is as fully general as any other type of computer.

The standard model of a computer, developed by John von Neumann in the 1940s, is a central processing unit combined with memory; it executes a series of instructions one after another. Neural networks are a second architecture that distributes computation and memory storage over thousands to billions of interconnected units operating in parallel. Cellular automata are like that, but even more radically distributed. Each cell is linked only to its neighbors, lacking the long-range connections that are found in both the von Neumann and the neural network architectures. (Mordvintsev’s neural cellular automata incorporate a smallish neural network into each cell, but cells still communicate only with their neighbors.)

Long-range connections are a major power drain, so if a cellular automaton could do the job of those other systems, it would save energy. “A kind of computer that looks like an NCA instead would be a vastly more efficient kind of computer,” said Blaise Agüera y Arcas, the chief technology officer of the Technology and Society division at Google.

But how do you write code for such a system? “What you really need to do is come up with [relevant] abstractions, which is what programming languages do for von Neumann–style computation,” said Melanie Mitchell of the Santa Fe Institute. “But we don’t really know how to do that for these massively distributed parallel computations.”

A neural network is not programmed per se. The network acquires its function through a training process. In the 1990s Mitchell, Jim Crutchfield of the University of California, Davis, and Peter Hraber at the Santa Fe Institute showed how cellular automata could do the same. Using a genetic algorithm, they trained automata to perform a particular computational operation, the majority operation: If a majority of the cells are dead, the rest should die too, and if the majority are alive, all the dead cells should come back to life. The cells had to do this without any way to see the big picture. Each could tell how many of its neighbors were alive and how many were dead, but it couldn’t see beyond that. During training, the system spontaneously developed a new computational paradigm. Regions of dead or living cells enlarged or contracted, so that whichever predominated eventually took over the entire automaton. “They came up with a really interesting algorithm, if you want to call it an algorithm,” Mitchell said.
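Why the majority operation is hard to solve locally can be seen from the most obvious candidate rule: each cell simply adopts the majority of its own small neighborhood. This hand-written radius-1 vote (an illustration, not the evolved rules from the Santa Fe experiments, which used wider neighborhoods) gets stuck: any block of two or more identical cells is a fixed point, so the automaton never reaches consensus.

```python
import numpy as np

def local_majority(cells):
    """Each cell adopts the majority of itself and its two nearest
    neighbors (radius 1, periodic boundary)."""
    votes = np.roll(cells, 1) + cells + np.roll(cells, -1)
    return (votes >= 2).astype(cells.dtype)

# Five of eight cells are alive, so the correct answer is all-alive.
# But the blocks freeze in place: the naive rule never resolves them.
cells = np.array([0, 0, 0, 1, 1, 1, 1, 1], dtype=np.uint8)
for _ in range(20):
    cells = local_majority(cells)
```

The genetic algorithm's trick of growing and shrinking whole regions is precisely what breaks this deadlock, letting the globally dominant state take over.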

She and her co-authors didn’t develop these ideas further, but Mordvintsev’s system has reinvigorated the programming of cellular automata. In 2020 he and his colleagues created an NCA that read handwritten digits, a classic machine learning test case. If you draw a digit within the automaton, the cells gradually change in color until they all have the same color, identifying the digit. This year, Gabriel Béna of Imperial College London and his co-authors, building on unpublished work by the software engineer Peter Whidden, created algorithms for matrix multiplication and other mathematical operations. “You can see by eye that it’s learned to do actual matrix multiplication,” Béna said.

Stefano Nichele, a professor at Østfold University College in Norway who specializes in unconventional computer architectures, and his co-authors recently adapted NCAs to solve problems from the Abstraction and Reasoning Corpus, a machine learning benchmark aimed at measuring progress toward general intelligence. These problems look like a classic IQ test. Many consist of pairs of line drawings; you have to figure out how the first drawing is transformed into the second and then apply that rule to a new example. For instance, the first might be a short diagonal line and the second a longer diagonal line, so the rule is to extend the line.

Neural networks typically do horribly, because they are apt to memorize the arrangement of pixels rather than extract the rule. A cellular automaton can’t memorize because, lacking long-range connections, it can’t take in the whole image at once. In the above example, it can’t see that one line is longer than the other. The only way it can relate them is to go through a process of growing the first line to match the second. So it automatically discerns a rule, and that enables it to handle new examples. “You are forcing it not to memorize that answer, but to learn a process to develop the solution,” Nichele said.
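The line-extending example can be illustrated with a hand-written local rule, a toy stand-in for what a trained NCA would learn: a cell switches on when its upper-left diagonal neighbor is on, so the solution is a growth process rather than a memorized answer.

```python
import numpy as np

def extend_diagonal(grid):
    """A purely local rule: a cell turns on when its upper-left
    neighbor is on, so a diagonal line grows one cell per step."""
    new = grid.copy()
    new[1:, 1:] |= grid[:-1, :-1]
    return new

grid = np.zeros((6, 6), dtype=np.uint8)
grid[0, 0] = grid[1, 1] = 1       # a short diagonal line
for _ in range(3):
    grid = extend_diagonal(grid)  # the line lengthens each step
```

Because the rule encodes the *process* of extending, it works unchanged on a line of any starting length or position, which is exactly the generalization that pixel memorization lacks.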

Other researchers are starting to use NCAs to program robot swarms. Robot collectives were envisioned by science fiction writers such as Stanisław Lem in the 1960s and started to become reality in the ’90s. Josh Bongard, a robotics researcher at the University of Vermont, said NCAs could design robots that work so closely together that they cease to be a mere swarm and become a unified organism. “You imagine, like, a writhing ball of insects or bugs or cells,” he said. “They’re crawling over each other and remodeling all the time. That’s what multicellularity is really like. And it seems — I mean, it’s still early days — but it seems like that might be a good way to go for robotics.”

To that end, Hartl, Levin and Andreas Zöttl, a physicist at the University of Vienna, have trained virtual robots — a string of beads in a simulated pond — to wriggle like a tadpole. “This is a super-robust architecture for letting them swim,” Hartl said.

For Mordvintsev, the crossover between biology, computers and robots continues a tradition dating to the early days of computing in the 1940s, when von Neumann and other pioneers freely borrowed ideas from living things. “To these people, the relation between self-organization, life and computing was obvious,” he said. “Those things somehow diverged, and now they are being reunified.”
