There are many patterns of collective behavior in biology that are easy to see because they occur along the familiar dimensions of space and time. Think of the murmuration of starlings. Or army ants that span gaps on the forest floor by linking their own bodies into bridges. Loose groups of shoaling fish that snap into tight schools when a predator shows up.
Then there are less obvious patterns, like those that the evolutionary biologist Jessica Flack tries to understand. In 2006, her graduate work at Emory University showed how just a few formidable-looking fighters could stabilize an entire group of macaques by intervening in scuffles between weaker monkeys, who would submit with teeth-baring grins rather than risk a fight they thought they would lose. But when Flack removed some of the police, the whole group became fractured and chaotic.
Like flocking or schooling, the policing behavior arises from individual interactions to produce a macroscopic effect on the entire ensemble. But it is subtler, perhaps harder to visualize and measure. Or, as Flack says of macaque society and many of the other systems she studies, “their metric space is a social coordinate space. It’s not Euclidean.”
Flack is now a professor at the Santa Fe Institute, where she has spent all of her postgraduate career, except for a stint at the University of Wisconsin, Madison. Her “collective computation” group, C4, which she co-runs with her collaborator, David Krakauer, probes not just macaques but neurons, slime molds and the internet for the rules that underlie each model system, as well as the general rules underlying them all.
Flack describes her work as an investigation into three interlocking questions. She wants to understand how phenomenological rules in biology, which seem to work in aggregate, emerge from microscopic ground truths. She wants to understand how groups solve problems and come to decisions. And she wants to know how complex systems stay robust in the face of shocks, like the macaques with their own police force that acts as social glue.
At its root, though, Flack’s focus is on information: specifically, on how groups of different, error-prone actors variously succeed and fail at processing information together. “When I look at biological systems, what I see is that they are collective,” she said. “They are all made up of interacting components with only partly overlapping interests, who are noisy information processors dealing with noisy signals.”
Over the phone, by Skype and via email, Quanta Magazine caught up with Flack to ask about C4’s current projects, her own career path, and the overarching philosophy behind her work. An edited and condensed version of our conversations follows.
How did you get into research on problem solving in nature, and how did you wind up at the Santa Fe Institute?
I’ve always been interested in how nature solves problems and where patterns come from, and why everything seems so organized despite so many potential conflicts of interest. Those sorts of questions have been with me since I was really little.
At Cornell, I was taking evolutionary biology classes, but none of the material really addressed these questions. I would spend a lot of time in Mann Library, which was where all the good biology books were. So I would sit on the floor in the dusty, dimly lit stacks with this pile of books around me. And in that way I discovered that there was a community of people working on the questions in evolutionary biology that I found most interesting.
They weren’t in the mainstream. One of the main places that turned out to be home to a lot of these people was the Santa Fe Institute. This was in the early to mid-’90s. I emailed the Santa Fe Institute and I requested something like 40 working papers. I was being a really annoying undergraduate. And someone mailed them to me! They actually snail-mailed me 40 of these papers, and I was thrilled, and I read them all.
Now that you’ve ended up there, can you break down what your C4 research group means by “collective computation”?
Collective computation is about how adaptive systems solve problems. All systems are about extracting energy and doing work, and physical systems in particular are about that. When you move to adaptive systems, you’ve got the additional influence of information processing, which we think allows a system to extract energy more efficiently even though it has to expend a little extra energy to do the information processing. Components of adaptive systems look out at the world, and they try to discover the regularities. It’s a noisy process.
Unlike in computer science where you have a program you have written, which has to produce a desired output, in adaptive systems this is a process that is being refined over evolutionary or learning time. The system produces an output, and it might be a good output for the environment or it might not. And then over time it hopefully gets better and better.
What we are doing at C4 is taking messy, conceptually challenging problems and turning them into something rigorous. We’re very philosophically oriented, but we’re also very quantitative, particularly in thinking about how nature can overcome subjectivity in information processing through collective computation. We really think the answer to these questions requires combining insights from statistical physics, theoretical computer science, information theory, evolutionary biology and cognitive science.
Can you walk us through an example? In a recent paper, your group looked at communication between neurons in the brains of macaques.
The human brain contains roughly 86 billion neurons, making our brains the ultimate collectives. Every decision we make can be thought of as the outcome of a neural collective computation. In the case of our study, which was led by my colleague Bryan Daniels, the data we analyzed were collected during an experiment by Bill Newsome’s group at Stanford from macaques who had to decide whether a group of dots moving across a screen was traveling left or right. Data on neural firing patterns were recorded while the monkey was performing this task. We found that as the monkey initially processes the data, a few single neurons have strong opinions about what the decision should be. But this is not enough: If we want to anticipate what the monkey will decide, we have to poll many neurons to get a good prediction of the monkey’s decision. Then, as the decision point approaches, this pattern shifts. The neurons start to agree, and eventually each one on its own is maximally predictive.
We have this principle of collective computation that seems to involve these two phases. The neurons go out and semi-independently collect information about the noisy input, and that’s like neural crowdsourcing. Then they come together and come to some consensus about what the decision should be. And this principle of information accumulation and consensus applies to some monkey societies also. The monkeys figure out sort of semi-independently who is capable of winning fights, and then they consolidate this information by exchanging special signals. The network of these signals then encodes how much consensus there is in the group about any one individual’s capacity to use force in fights.
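The two-phase principle Flack describes — noisy, semi-independent evidence gathering followed by consensus — can be illustrated with a toy simulation. This is a minimal sketch, not the actual analysis from the Daniels study: it assumes each “neuron” is an independent accumulator of noisy evidence for the true direction, with invented drift and noise parameters. Early on, any single accumulator is barely better than chance, but polling the whole population is much more predictive; near the decision point, each accumulator has converged on its own.

```python
import random

random.seed(42)

def run_trial(n_neurons=50, n_steps=200, drift=0.1, noise=1.0):
    """Each toy neuron independently accumulates noisy evidence for the
    true direction (coded as +1); return its running totals over time."""
    totals = [0.0] * n_neurons
    snapshots = []
    for _ in range(n_steps):
        totals = [t + drift + random.gauss(0.0, noise) for t in totals]
        snapshots.append(list(totals))
    return snapshots

def single_accuracy(trials, step):
    """How often a lone neuron's sign matches the true direction."""
    votes = [v > 0 for trial in trials for v in trial[step]]
    return sum(votes) / len(votes)

def pooled_accuracy(trials, step):
    """How often a majority vote over the whole population is correct."""
    correct = [sum(v > 0 for v in trial[step]) > len(trial[step]) / 2
               for trial in trials]
    return sum(correct) / len(correct)

trials = [run_trial() for _ in range(200)]
early, late = 5, 199

s_early = single_accuracy(trials, early)   # lone neurons: weak opinions
p_early = pooled_accuracy(trials, early)   # "neural crowdsourcing" helps
s_late = single_accuracy(trials, late)     # near the decision, consensus
print(s_early, p_early, s_late)
```

In this sketch the crowdsourcing advantage comes purely from averaging out independent noise; in the real data the neurons interact, which is what makes the consensus phase interesting.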
I noticed that another recent paper uses the same macaque data set you produced during your graduate work at the Yerkes National Primate Research Center in Lawrenceville, Georgia. What did you find when you returned to thinking about this system?
We wanted to understand how social systems or other biological systems go from state A to state B. How a group of fish goes from shoaling to schooling, or how a social system goes from having a few super-powerful animals to a setup where there is less inequality. One mechanism known to facilitate switching between different states like this is for the system to sit near what’s called a critical or tipping point. We set out to find a way to measure, in biologically meaningful terms, how far a system sits from the critical point. Could we come up with units that mechanistically make sense?
We were interested in whether we could induce the monkey society we were studying to change from its status quo of many small fights and a few large ones to having many large fights. We observed that fights in this monkey group range in size from two to 30 or so individuals, with small fights common and large fights very rare. By simulating the society using data we had collected on fight-joining decisions, we found that we could measure the number of monkeys whose propensity to join fights would have to increase to move the system closer to the critical point.
In this system, it takes about three to five individuals to push the system over the edge. We also found that individuals vary in how much their behavior influences the system. If big contributors become more likely to join fights, the system moves toward the critical point where it is very sensitive, meaning a small perturbation can knock it over into this all-fight state. And while we didn’t study this in the paper, we speculate that the all-fight state, which means the system is going to change dramatically, might be useful. It might be something you want to do, to move toward the critical point and completely reconfigure the group if the environment is changing from known to unknown.
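The idea that raising a few individuals’ fight-joining propensities can move the group toward a tipping point can be sketched as a toy branching process. This is an invented illustration, not the model from Flack’s paper: the group size, propensity values, and recruitment rule are all assumptions chosen so that nudging just four “big contributors” shifts the system from mostly small fights toward much larger ones.

```python
import random

random.seed(7)

N = 30  # group size; observed fights ranged from 2 to ~30 individuals

def fight_size(propensities):
    """Toy branching process: a fight starts with two individuals, and
    each new participant gives every bystander one independent chance
    to join, according to that bystander's own propensity."""
    joined = set(random.sample(range(N), 2))
    frontier = list(joined)
    while frontier:
        recruits = []
        for _ in frontier:
            for j in range(N):
                if j not in joined and random.random() < propensities[j]:
                    joined.add(j)
                    recruits.append(j)
        frontier = recruits
    return len(joined)

def mean_size(propensities, trials=2000):
    return sum(fight_size(propensities) for _ in range(trials)) / trials

quiet = [0.02] * N        # subcritical: most fights stay small
pushed = list(quiet)
for i in range(4):        # raise the join propensity of just four
    pushed[i] = 0.15      # "big contributors"

m_quiet = mean_size(quiet)
m_pushed = mean_size(pushed)
print(m_quiet, m_pushed)  # average fight size grows markedly
```

The design point mirrors the paper’s finding: the control parameter is not a global knob but the propensities of a handful of influential individuals, and small changes to those few values are enough to change the fight-size distribution of the whole group.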
The macaques served as the model system for asking these questions, but we hope the approach that we developed can be applied to lots of other different kinds of data.
Human society also seems a little chaotic recently. Are you ever tempted to apply this kind of thinking in that direction?
Absolutely. With the help of some friends in finance and economics, we are moving a little bit into financial markets in our research. I think that’s an amazing model system for asking these kinds of collective computation questions. My next meeting today is about how to apply our criticality approach, coupled to new machine-learning results that are able to find phases of matter for physical systems, to either political data or market data. Our goals are to address whether there is evidence for phase transitions or critical phenomena in financial data and to understand the behavioral processes that might move markets closer to critical points.
Now that you can follow up on these kinds of questions to your heart’s content, what would you say if you could visit yourself back at Cornell, in the stacks of the library?
Jorge Luis Borges is one of my favorite writers, and he wrote something along the lines of “the worst labyrinth is not that intricate form that can trap us forever, but a single and precise straight line.” My path is not a straight line. It has been a quite interesting, labyrinthine path, and I guess I would say not to be afraid of that. You don’t know what you’re going to need, what tools or concepts you’re going to need. The thing is to read broadly and always keep learning.
Can you talk a bit about what it’s like to start with a table of raw data and pull these sorts of grand patterns out of it? Is there a single eureka moment, or just a slow realization?
Typically what happens is, we have some ideas, and our group discusses them, and then over months or years in our group meetings we sort of hash out these issues. We are okay with slow, thoughtful science. We tend to work on problems that are a little bit on the edge of science, and what we are doing is formalizing them. A lot of the discussion is: “What is the core problem, how do we simplify, what are the right measurements, what are the right variables, what is the right way to represent this problem mathematically?” It’s always a combination of the data, these discussions, and the math on the board that leads us to a representation of the problem that gives us traction.
We have this argument at the Santa Fe Institute a lot. Some people will say, “Well, at the end of the day it’s all math.” And I just don’t believe that. I believe that science sits at the intersection of these three things — the data, the discussions and the math. It is that triangulation — that’s what science is. And true understanding, if there is such a thing, comes only when we can do the translation between these three ways of representing the world.
Correction: This article was revised on July 6, 2017, to correct the caption of the video and to amend a statement about the professional role of David Krakauer in C4.
This article was reprinted on TheAtlantic.com.