Game Theory Calls Cooperation Into Question

A recent solution to the prisoner’s dilemma, a classic game theory scenario, has created new puzzles in evolutionary biology.

A vervet monkey will scream an alarm when a predator is nearby, putting itself in danger.

Matt Jenner

When the manuscript crossed his desk, Joshua Plotkin, a theoretical biologist at the University of Pennsylvania, was immediately intrigued. The physicist Freeman Dyson and the computer scientist William Press, both highly accomplished in their fields, had found a new solution to a famous, decades-old game theory scenario called the prisoner’s dilemma, in which players must decide whether to cheat or cooperate with a partner. The prisoner’s dilemma has long been used to help explain how cooperation might endure in nature. After all, natural selection is ruled by the survival of the fittest, so one might expect that selfish strategies benefiting the individual would be most likely to persist. But careful study of the prisoner’s dilemma revealed that organisms could act entirely in their own self-interest and still create a cooperative community.

Press and Dyson’s new solution to the problem, however, threw that rosy perspective into question. It suggested the best strategies were selfish ones that led to extortion, not cooperation.

Plotkin found the duo’s math remarkable in its elegance. But the outcome troubled him. Nature includes numerous examples of cooperative behavior. For example, vampire bats donate some of their blood meal to community members that fail to find prey. Some species of birds and social insects routinely help raise another’s brood. Even bacteria can cooperate, sticking to each other so that some may survive poison. If extortion reigns, what drives these and other acts of selflessness?

Candace diCarlo

Joshua Plotkin has applied the prisoner’s dilemma to evolving populations.

Press and Dyson’s paper looked at a classic game theory scenario — a pair of players engaged in repeated confrontation. Plotkin wanted to know if generosity could be revived if the same math was applied to a situation that more closely resembled nature. So he recast their approach in a population, allowing individuals to play a series of games with every other member of their group. The outcome of his experiments, the most recent of which was published in December in the Proceedings of the National Academy of Sciences, suggests that generosity and selfishness walk a precarious line. In some cases, cooperation triumphs. But shift just one variable, and extortion takes over once again. “We now have a very general explanation for when cooperation is expected, or not expected, to evolve in populations,” said Plotkin, who conducted the research along with his colleague Alexander Stewart.

The work is entirely theoretical at this point. But the findings could potentially have broad-reaching implications, explaining phenomena ranging from cooperation among complex organisms to the evolution of multicellularity — a form of cooperation among individual cells.

Plotkin and others say that Press and Dyson’s work could provide a new framework for studying the evolution of cooperation using game theory, allowing researchers to tease out the parameters that permit cooperation to exist. “It has basically revived this field,” said Martin Nowak, a biologist and mathematician at Harvard University.

Tit for Tat

Vervet monkeys are known for their alarm calls. A monkey will scream to warn its neighbors when a predator is nearby. But in doing so, it draws dangerous attention to itself. Scientists going back to Darwin have struggled to explain how this kind of altruistic behavior evolved. If a high enough percentage of screaming monkeys gets picked off by predators, natural selection would be expected to snuff out the screamers in the gene pool. Yet it does not, and speculation as to why has led to decades of (sometimes heated) debate.

Researchers have proposed different possible mechanisms to explain cooperation. Kin selection suggests that helping family members ultimately helps the individual. Group selection proposes that cooperative groups may be more likely to survive than uncooperative ones. And direct reciprocity posits that individuals benefit from helping someone who has helped them in the past.

The prisoner’s dilemma helps researchers understand the simple strategies, such as cooperating with generous community members and cheating the cheaters, that can create a cooperative society under the right conditions. First described in the 1950s, the classic prisoner’s dilemma involves a pair of felons who are arrested and placed in separate rooms. Each is given a choice: confess or stay silent. In the best outcome, both say nothing and go free. But since neither knows what the other will do, keeping quiet is risky. If one snitches and the other stays silent, the rat gets a lighter sentence while the quiet partner suffers.
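In the game theory shorthand, the sentences become points: mutual cooperation pays the reward R, mutual defection the punishment P, and a lone defector earns the temptation T while the silent partner gets the sucker's payoff S. A minimal sketch, using the conventional textbook values rather than figures from the article, shows why defection dominates a single round:

```python
# One-shot prisoner's dilemma with the conventional payoffs
# T=5, R=3, P=1, S=0 (illustrative values, not from the article).
PAYOFF = {
    ("C", "C"): (3, 3),  # both cooperate: reward R for each
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, opponent_move)][0])

# Defection dominates: it is the best reply whether the partner
# cooperates or defects, even though mutual cooperation pays more
# than mutual defection.
assert best_response("C") == "D"
assert best_response("D") == "D"
```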

Olena Shmahalo/Quanta Magazine

Even simple organisms, such as microbes, engage in these types of games. Some marine microorganisms produce molecules that help them gather iron, a vital nutrient. Microbial colonies often have both producers and cheaters — microbes that don’t make the compound themselves, but exploit their neighbors’ molecules.

In a single instance of the prisoner’s dilemma, the best strategy is to defect — squeal on your partner and you’ll get less time. But if the game repeats over and over, the optimal strategy changes. In a single encounter, a vervet monkey that spots a predator is safer if it stays silent. But over the course of a lifetime, the monkey is more likely to survive if it warns its neighbors of impending danger and they do the same. “Each player has the incentive to defect, but overall they will do better if they cooperate,” Plotkin said. “It’s a classic problem for how cooperation can emerge.”

In the 1970s, Robert Axelrod, a political scientist at the University of Michigan, launched a round-robin tournament pitting different strategies against each other. To the surprise of many contenders, the simplest approach won. Simply mimicking the other player’s previous move, a strategy called tit for tat, triumphed over much more sophisticated programs.
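A toy version of such a round-robin is easy to sketch. The strategies and the 10-round match length below are arbitrary illustrations, not Axelrod's actual field of submitted programs, and which strategy tops the rankings depends heavily on the mix of entrants:

```python
# A toy Axelrod-style match-up (illustrative; the real tournament
# had dozens of submitted programs). Payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"   # copy opponent's last move

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def play_match(s1, s2, rounds=10):
    """Iterate the dilemma for a fixed number of rounds; return both totals."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# Tit for tat never outscores its partner in any single match...
assert play_match(tit_for_tat, always_defect) == (9, 14)
# ...but mutual cooperation earns it a high score against fellow cooperators,
# which is how it accumulated the best total in Axelrod's field.
assert play_match(tit_for_tat, tit_for_tat) == (30, 30)
```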

Tit-for-tat strategies can be found across the biological world. Pairs of stickleback fish, for example, scout nearby predators in a sort of tit-for-tat duet. If one fish makes the risky move of darting ahead, the other reciprocates with a similar act of bravery. If one hangs back, hoping to let its partner take the risk, the partner also drops back.

Over the last 30 years, scientists have explored more evolutionarily realistic versions of the prisoner’s dilemma than Axelrod’s simple version. Players in a large round-robin tournament start with a varied set of strategies — think of this as their genetically determined fitness. To mimic survival of the fittest, the winner of each interaction begets more offspring, which inherit the same strategy as their parent. The most successful strategies thus grow in popularity over time.

The winning approach depends on a variety of factors, including the size of the group, which strategies are present at the start, and how often players make mistakes. Indeed, adding noise to the game — a random change in strategy that acts as a stand-in for genetic mutation — ends the reign of tit for tat. Under these circumstances, a variant known as generous tit for tat, which involves occasionally forgiving another’s betrayal, triumphs.
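The effect of noise can be sketched in a few lines. The 5 percent error rate and the one-in-three forgiveness probability below are illustrative choices, not parameters from any of the studies described here:

```python
import random

# Sketch of the noise result: with occasional mistakes, plain tit for tat
# gets trapped in retaliation spirals, while generous tit for tat
# (forgive a defection with some probability) recovers cooperation.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def self_play(forgiveness, rounds=2000, noise=0.05, seed=1):
    """Average per-round payoff when a strategy plays a copy of itself."""
    rng = random.Random(seed)
    last = ["C", "C"]
    total = 0
    for _ in range(rounds):
        moves = []
        for i in (0, 1):
            intended = last[1 - i]                       # copy partner's last move
            if intended == "D" and rng.random() < forgiveness:
                intended = "C"                           # occasionally forgive
            if rng.random() < noise:                     # trembling hand: flip the move
                intended = "D" if intended == "C" else "C"
            moves.append(intended)
        total += PAYOFF[(moves[0], moves[1])] + PAYOFF[(moves[1], moves[0])]
        last = moves
    return total / (2 * rounds)

strict = self_play(forgiveness=0.0)
generous = self_play(forgiveness=1 / 3)
assert generous > strict   # forgiveness pays once mistakes are possible
```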

The overall flavor of these simulations is optimistic — kindness pays. “The most successful strategies often tend to be the ones that don’t try to take advantage of another person,” Nowak said.

Enter Press and Dyson with a dark dose of despair.

Triumph of the Cooperator

Despite their impressive résumés, both Press and Dyson were relative newcomers to game theory. That made their new solution to the 60-year-old prisoner’s dilemma, published in the Proceedings of the National Academy of Sciences in 2012, even more unexpected. “It’s a remarkable paper that could well have been written 30 years ago,” Plotkin said. “The mathematical idea at the heart of their paper was overlooked, despite hundreds of scientists studying game theory and its applications.”

In the iterated prisoner’s dilemma, two players compete against each other in a series of rounds. Researchers can then determine which strategy is most successful in the long run. Below, the player in the left column employs a generous strategy, attempting to entice its opponent into helping by sometimes helping even when the opponent defects. The selfish player on the right tends to defect, only helping often enough to prevent its opponent from permanent defection. Each round is scored by using a matrix like the bat example above:

Olena Shmahalo/Quanta Magazine

In a head-to-head match, the selfish strategy defeats the generous one. Yet the same strategies have different outcomes when applied to a more evolutionarily realistic setting. In the video below, a population of players engages in a series of head-to-head encounters, much like a round-robin tournament. The player that “wins” each encounter begets more offspring that employ similar strategies. Here, a single player that employs a generous strategy will tend to spread its strategy through the population. Ultimately the entire population converts from selfish to generous strategies. Biologists use models like this to explain how cooperative behavior persists in the wild.
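A stripped-down version of such a population model is replicator dynamics, in which each strategy's share of the population grows in proportion to its average payoff. The sketch below is an illustration, not the model used in the paper: it pits tit for tat against unconditional defectors, using per-match scores from a 10-round iterated game, and shows the threshold effect, where a small cluster of reciprocators can take over but a lone one cannot:

```python
# Discrete replicator dynamics for tit for tat vs. always-defect.
# Per-match scores come from a 10-round iterated game with payoffs
# T=5, R=3, P=1, S=0 (illustrative, not the paper's exact model).
M = {("TFT", "TFT"): 30, ("TFT", "ALLD"): 9,
     ("ALLD", "TFT"): 14, ("ALLD", "ALLD"): 10}

def evolve(x, generations=200):
    """x = share of the population playing TFT; return the share after evolving."""
    for _ in range(generations):
        f_tft = x * M[("TFT", "TFT")] + (1 - x) * M[("TFT", "ALLD")]
        f_alld = x * M[("ALLD", "TFT")] + (1 - x) * M[("ALLD", "ALLD")]
        mean = x * f_tft + (1 - x) * f_alld
        x = x * f_tft / mean        # reproduce in proportion to payoff
    return x

assert evolve(0.20) > 0.99   # a modest cluster of reciprocators takes over
assert evolve(0.01) < 0.01   # but too few, and defection stays entrenched
```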

Press and Dyson outlined an approach, dubbed extortion, in which one player could always win by choosing to defect according to a prescribed set of probabilities. Press and Dyson’s strategy is remarkable in that it allows one player to control the outcome of the game. “The main innovation is to calculate how often you can defect without demotivating your co-player completely,” said Christian Hilbe, a researcher in Nowak’s group at Harvard. Moreover, the winning player need only remember one previous move, but the strategy works just as well as those that incorporate many previous rounds of play.

The second player is forced to cooperate with the extortionist because that’s the option that provides the best payoff. “If I’m an extortionist, once in a while I’ll defect even though we cooperated, in precisely enough proportion that no matter what you do, I’ll have a higher payoff than you,” Plotkin said. The situation is reminiscent of a group project in junior high school. If one member of the team slacks off, the conscientious students have no choice but to work harder in order to earn a good grade.
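Concretely, an extortionate strategy is just four numbers: the probability of cooperating after each possible outcome of the previous round. The sketch below uses the example vector from Press and Dyson's paper, with conventional payoffs T=5, R=3, P=1, S=0, and checks via the game's stationary distribution that the extortioner's surplus over the mutual-defection payoff is triple the opponent's, whatever the opponent does:

```python
# Press and Dyson's extortionate strategy as four memory-one
# probabilities of cooperating after each outcome (CC, CD, DC, DD),
# seen from the extortioner's side. This vector enforces
#   s_X - P = 3 * (s_Y - P)   against any opponent.
P_EXTORT = {"CC": 11 / 13, "CD": 1 / 2, "DC": 7 / 26, "DD": 0.0}
PAY_X = {"CC": 3, "CD": 0, "DC": 5, "DD": 1}     # extortioner's payoff
PAY_Y = {"CC": 3, "CD": 5, "DC": 0, "DD": 1}     # opponent's payoff

def long_run_payoffs(q):
    """Stationary payoffs vs. a memory-one opponent q (keyed by THEIR view)."""
    states = ["CC", "CD", "DC", "DD"]
    flip = {"CC": "CC", "CD": "DC", "DC": "CD", "DD": "DD"}
    dist = {s: 0.25 for s in states}
    for _ in range(2000):                         # power-iterate the Markov chain
        new = {s: 0.0 for s in states}
        for s, prob in dist.items():
            px = P_EXTORT[s]                      # extortioner cooperates?
            py = q[flip[s]]                       # opponent cooperates?
            new["CC"] += prob * px * py
            new["CD"] += prob * px * (1 - py)
            new["DC"] += prob * (1 - px) * py
            new["DD"] += prob * (1 - px) * (1 - py)
        dist = new
    sx = sum(dist[s] * PAY_X[s] for s in states)
    sy = sum(dist[s] * PAY_Y[s] for s in states)
    return sx, sy

# Whatever the opponent's cooperation rate, the linear relation holds:
for coop in (0.9, 0.5, 0.2):
    q = {s: coop for s in ["CC", "CD", "DC", "DD"]}
    sx, sy = long_run_payoffs(q)
    assert abs((sx - 1) - 3 * (sy - 1)) < 1e-9
```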

Press and Dyson’s original paper was set in a classical game theory context — a series of interactions between a single pair of players. But Plotkin and Stewart wanted to know what would happen if they applied the same mathematical approach to an evolving group, such as vervet monkeys or vampire bats, who breed and survive based on their individual fitness. They explored the broader class of successful strategies, called zero-determinant strategies, that Press and Dyson had identified.

This class of strategies includes the moral opposite of extortion: generosity. In general, a player employing a generous strategy will always cooperate when his or her opponent does. If the opponent defects, the first player will still cooperate with a certain probability in an attempt to coax the opponent back to generosity.

To Plotkin and Stewart’s relief, generous strategies rather than the extortive ones were most successful when applied to evolving populations. “We found a much rosier picture,” said Plotkin, who published the results in 2013 in the Proceedings of the National Academy of Sciences. “The most robust strategies, the ones that can’t be replaced by other strategies, are generous.”

The basic intuition is simple. “Extortion does well with one opponent,” Plotkin said. “But in a large population, an extortioner will eventually pair up with another extortioner.” Then both will defect, getting a poorer payoff. “Plotkin improved our model by turning it upside down,” Dyson said. “If you want someone to cooperate with you, it’s better to bribe the person with short-range benefits than punishing him right away.”

Hilbe confirmed these findings in a real-world scenario, pitting human players against computers using either generous or extortionist strategies. As predicted, people won larger payouts when playing against generous computers than against selfish ones. But people also tended to punish extortionist opponents, refusing to cooperate even though it would be in their best interest to do so. That in turn reduced the payoff for both human player and computer. In the end, the generous computer won a larger payout than the extortionist computer.

The Extortionist’s Revenge

Given these outcomes, Plotkin hoped extortionists could be kept at bay. But that optimism was short-lived. Following his 2013 study, Plotkin changed the payoffs to be won by cooperating or defecting. Players passed both their strategy and the strategic payoffs to their offspring; both quantities might suffer random mutations.

With this shake-up to the system, which might correspond to a change in environmental conditions, the outcome returned to the dark side. Generosity was no longer the favored solution. “As mutations that increase the temptation to defect sweep through the group, the population reaches a tipping point,” Plotkin said. “The temptation to defect is overwhelming, and defection rules the day.”
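One way to build intuition for the tipping point (purely illustrative; this is not Stewart and Plotkin's actual mutation model) is to note that in a simple replicator setting pitting tit for tat against unconditional defectors, the minimum share of reciprocators needed for cooperation to spread rises as the temptation payoff T mutates upward:

```python
# Toy tipping-point calculation. In a 10-round iterated game with
# payoffs R=3, P=1, S=0, match scores are TFT/TFT = 10R,
# TFT/ALLD = S + 9P, ALLD/TFT = T + 9P, ALLD/ALLD = 10P.
# Under replicator dynamics, TFT spreads only when its starting share
# exceeds a threshold, which here works out to x* = 1 / (22 - T):
# as T grows, the bar for cooperation rises toward impossibility.
def invasion_threshold(T, R=3, P=1, S=0, rounds=10):
    """Minimum starting share of TFT players needed for TFT to spread."""
    slope = (rounds * R - (T + (rounds - 1) * P)) \
            - ((S + (rounds - 1) * P) - rounds * P)
    gap_at_zero = (S + (rounds - 1) * P) - rounds * P
    return -gap_at_zero / slope

assert abs(invasion_threshold(T=5) - 1 / 17) < 1e-12
# A mutation that raises the temptation payoff raises the bar:
assert invasion_threshold(T=12) > invasion_threshold(T=5)
```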

Plotkin said the outcome was unexpected. “It’s surprising because it’s within the same framework — game theory — that people have used to explain cooperation,” he said. “I thought that even if you allowed the game to evolve, cooperation would still prevail.”

The takeaway is that small tweaks to the conditions can have a major effect on whether cooperation or extortion triumphs. “It’s quite neat to see that this leads to qualitatively different outcomes,” said Jeff Gore, a biophysicist at the Massachusetts Institute of Technology who wasn’t involved in the study. “Depending on the constraints, you can evolve qualitatively different kinds of games.”

Chris Adami, a computational biologist at Michigan State University, contends that there is no such thing as an optimal strategy — the winner depends on the conditions.

Indeed, Plotkin’s study is unlikely to be the end of the story. “I’m sure there will be people who look at how the result depends on the assumptions,” Hilbe said. “Perhaps cooperation can somehow be rescued.”

The Prisoner’s Future

The prisoner’s dilemma is obviously a highly simplified version of real interactions.

So how good a model is it for studying the evolution of cooperation? Dyson isn’t optimistic. He likes Plotkin’s and Hilbe’s studies, but mostly because they involve interesting mathematics. “Certainly as a description of possible worlds it’s quite interesting, but it doesn’t look to me like the world of biology,” Dyson said.

Ethan Akin, a mathematician who has explored strategies similar to Press and Dyson’s, said he thinks the results are more applicable to sociological decision making than to the evolution of cooperation.

But some experimental biologists disagree, saying that both the prisoner’s dilemma and game theory more broadly have had a profound effect on their field. “I think that the contribution of game theory to microbial cooperation is huge,” said Will Ratcliff, an evolutionary biologist at the Georgia Institute of Technology.

For example, scientists studying antibiotic resistance are using a game theory scenario called the snowdrift game, in which a player always benefits from cooperating. (If you’re stuck in your apartment building after a blizzard, you benefit by shoveling the driveway, but so does everyone else who lives there and doesn’t shovel.) Some bacteria can produce and secrete an enzyme capable of deactivating antibiotic drugs. The enzyme is costly to produce, and lazy bacteria that don’t make it can benefit by using enzymes produced by their more industrious neighbors. In a strict prisoner’s dilemma scenario, the slackers would eventually kill off the producers, harming the entire population. But in the snowdrift game, the producers have greater access to the enzyme, thus improving their fitness, and the two types of bacteria can coexist.
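The difference between the two games comes down to one cell of the payoff table: in the snowdrift game, cooperating alone still beats mutual defection. The benefit and cost values below are illustrative, not drawn from the antibiotic studies:

```python
# Snowdrift payoffs with benefit b=4 and a sharable cost c=2
# (illustrative numbers). Compare with the prisoner's dilemma,
# where defecting against a defector is always the better move.
b, c = 4, 2
SNOWDRIFT = {
    ("C", "C"): (b - c / 2, b - c / 2),  # split the shoveling: 3 each
    ("C", "D"): (b - c, b),              # shovel alone: 2 vs. free-riding 4
    ("D", "C"): (b, b - c),
    ("D", "D"): (0, 0),                  # nobody shovels, nobody gets out
}

def best_response(game, opponent_move):
    return max("CD", key=lambda my: game[(my, opponent_move)][0])

# Against a defector, cooperating is still the better option here,
# which is why enzyme producers and cheater microbes can coexist.
assert best_response(SNOWDRIFT, "D") == "C"
# Against a cooperator, free-riding still pays, so cheaters persist too.
assert best_response(SNOWDRIFT, "C") == "D"
```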

Microbes in the lab can mimic game theory scenarios, but whether these controlled environments accurately reflect what’s happening in nature is another story. “We set the dynamics of the game by assuming a certain kind of ecology,” Ratcliff said. But those parameters might not mirror the microbe’s normal habitat. “To show that the dynamics of an experiment conform to prisoner’s dilemma or other games doesn’t necessarily mean those mechanisms drive them in nature,” Ratcliff said.

This article was reprinted on ScientificAmerican.com.

Reader Comments

  • I agree there may well be some differences.
    Let’s say both bats starve in the defection scenario — that way both have much more of an incentive to cooperate.
    Let’s say communities communicate about cooperative/defective behavior experienced; they’d quickly set everyone up against the defectors, ensuring only cooperation is rewarded (assuming their claims are deemed credible).
    So maybe the reason the given prisoner’s dilemma scenario doesn’t explain nature well is because it’s intentionally narrow-minded, intentionally focusing on a specific pay-off matrix with the intent to demonstrate incentive to defect.

  • The linearity and artificiality of game theoretical conclusions, and their inorganic
    assumptions have long impressed me. Particularly considering we now live in
    a world largely and painfully predicated on core game theory strategies from
    on high (e.g.: Wall Street, the Beltway), essentially unchanged since those
    happy days of nuclear armageddon.

  • Agree with Ratcliff’s last statement. The issue is considerably more complicated in humans than in bacteria, and even in bacteria one needs to consider how hostile the environment is. What is astonishing about most of the PD literature is how it claims to examine evolution but never mentions the environment. A hostile environment, as Dugatkin showed, selects for more cooperation. The free-living bacteria that under drought conditions form a colony that creates a stalk and spores are an example, and they point to the next error, which is assuming a reward is always available no matter the actions of the players. This is not how nature works. If too few of the bacteria cooperate, no stalk is made, no spores are released, and all of the bacteria have a fitness of zero.

    Similarly, in humans there are many times when obtaining any reward requires N individuals to cooperate, and often that number is unknowable. Nine of us might kill that elephant, or it might be one or two or three too few to get it done, resulting in nothing for all of us. Even with two partners, if you selfishly fail to cut off the monkey’s escape route, he gets away and we both go hungry. Think I will go hunting with you again?

    Which brings up yet another issue: avoiding detection and the cost of being detected. PD assumes that the cost of defecting is limited to a partner picking defect in the next round. Some models allow partners to punish a player at a significant cost to themselves or to move to another partner, but even these fall well short of what we see in human groups. As described by Boehm in “Hierarchy in the Forest,” those whose selfish behavior is detected face collective punishment by the group, costing each group member very little, which ranges from social shunning to being murdered by one’s own family or abandoned by the group.

    The power in a group of cooperators belongs to the cooperators and not the defectors, as cooperators work together to thwart defectors, but defectors by definition cannot gang up on cooperators in return. As PD examines interactions between two parties, if the cooperator is paired with a defector or extorter they have no one to cooperate with. But in a group they have plenty of cooperative partners while the selfish stand alone. This imbalance of power means that the opportunities to defect are extremely limited, as one must avoid detection, a situation which favors cooperation as the dominant and more numerous strategy.

    Finally, in group social territorial species, having and defending a territory is an all-or-nothing issue, with N members required to keep neighbors from taking your land and killing everyone. Either all of you have land and lives or none of you do, and at best few men and children survive. So we see that fairly often the “reward” for defecting is actually not 3 or whatever number is randomly chosen; instead it is nothing, or loss of social status, or death for the individual, or death for the individual and all their relatives.

  • Allow me to step out of the experiment for a bit. These “games” are usually framed to be played out by “persons” with few defining characteristics. People, on the other hand, as well as many animals, experience life after birth in a sea of cooperation (many, not all). The rule is you cooperate/collaborate in family and you compete outside. So, human young are trained to “share” and practice other forms of cooperation for years before they are thrust into situations in which they must deal with outsiders independently. No matter how much preparation they are given (through fairy tales and other wisdom stories), they are primed to favor cooperation with people who are a lot like them. (“Others” get treated differently, but the definition of others is not well-defined.)

    So, is cooperation self-reinforcing because of this bias? Is selfishness?

  • Cooperation is a rational response to the interaction of organisms and their perceived environment

    While most approaches concluded that individuals interacting in single Prisoners’ Dilemma games should opt for the selfish option, empirical research reveals considerable amounts of cooperation.

    Subjective Expected Relative Similarity (SERS) is the only theory that explains and predicts the rationale of cooperation in single Prisoners’ Dilemma games. Published in the Journal of Experimental Psychology (Fischer, 2009), SERS provides a simple decision rule: cooperate whenever the (perceived) similarity with the opponent exceeds a similarity threshold that is derived from the expected payoffs of the interaction. In other words, individuals should cooperate whenever their opponent is sufficiently similar to themselves, given the specific consequences of the interaction. The SERS theory has been validated in several laboratory studies.

    Further extension of SERS into a strategy for repeated Prisoner’s Dilemma interactions resulted in the development of the Mimicry and Relative Similarity (MaRS) strategy. A manuscript published in the Proceedings of the National Academy of Sciences (Fischer, Frid, Goerg, Rubenstein, Levin and Selten, 2013) shows how fusing the principles of enacted and expected mimicry while conditioning their function on two similarity indices sheds light on the evolution of cooperation. Testing MaRS in computer simulations of behavioral niches, populated with agents that enact various strategies and learning algorithms, shows how MaRS outperforms all the opponent strategies it was tested against, pushes non-cooperative opponents toward extinction, and promotes the development of cooperative populations.

    Importantly, the SERS solution is not restricted to the Prisoners’ Dilemma game. It provides the rationale of cooperation for a family of games, termed Similarity Sensitive Games (SSGs, Fischer, 2012).


    Fischer, I. (2009). Friend or Foe: Subjective Expected Relative Similarity as a Determinant of Cooperation. Journal of Experimental Psychology – General.

    Fischer, I. (2012). Similarity or Reciprocity? On the Determinants of Cooperation in Similarity-Sensitive Games. Psychological Inquiry. 23,1, 48-54.

    Fischer, I., Frid, A., Goerg, S. J., Levin S. A., Rubenstein, D. I., & Selten, R., (2013). Fusing enacted and expected mimicry generates a winning strategy that promotes the evolution of cooperation. Proceedings of the National Academy of Sciences. Available online at: www.pnas.org/lookup/suppl/doi:10.1073/pnas.1308221110/-/DCSupplemental

  • Interesting article. I’d like to see more of this type of research done from a systems engineering perspective with regard to emergence and self-organized criticality within social systems, taking into consideration not only game theory but also social contract theory: that is, at a macro level, how the selfish/generous strategies affect other parts of the system.

    For example, when we’re using the sandpile analogy to describe economics, we typically say the competitive forces within the system drive the system toward an attractor (create the sandpile) until it reaches some critical value, and then those same forces are responsible for the avalanche. This is typically said to keep the system in balance and maintain the shape of the sandpile at that critical value. However, this only considers the competitive force.

    If we now consider the cooperative force acting within the sandpile, we can say that since it’s a better lifestyle at the top of the pile, the agents at the top will tend to form cooperative strategies with other agents within the same stratum. They use a cooperative strategy here to help maintain their position within the sandpile and defend against an avalanche. Through this process classes emerge and become stratified throughout the sandpile. Agents will then also use competitive strategies to maintain their position in the pile and defend against other agents trying to displace them.

    This manipulation of the system’s dynamics will have a hampering effect, displacing entropy from the manipulated area into other parts of the system. The sandpile then bulges in other areas, creating distortions and/or depressions that can make the pile unstable. Therefore, further manipulation is required, and we enter an endless cycle of manipulation and effect.

  • The author might have mentioned the author of the winning strategy in Axelrod’s tournament, who was Anatol Rapoport. And when Axelrod published the results and the winning algorithm, then held a second tournament, Anatol Rapoport won again, using the same strategy. Axelrod wrote an interesting book, “The Evolution of Cooperation,” on the two tournaments and his findings at the time.

  • “Certainly as a description of possible worlds it is quite interesting…” Well, I would like to know more about the (possible) applications of this type of game theory (of Plotkin and Hilbe) to modal logic, Kripke’s possible-worlds semantics, etc. Could anybody help me?

  • So,

    Bat A shares, Bat B doesn’t; the result is Bat B is full?
    Strange, because what about investment?
    Bat B goes hungry and moves to the status of “Bat A doesn’t share.”


  • Interesting, but perhaps in nature environments are far more patchy than previously suspected. Maybe howling monkeys are not at risk some of the time. What would the outcomes be when the environment changes irregularly, in a way not altogether predictable to either of the two players, say when multiple players are present only part of the time? What is the outcome when certain behaviors become fixed, while others mutate? Could this suggest that the matrix in the simple prisoner’s dilemma may sometimes be too small, or under certain probabilities of behavior too large, between trials?

  • I am a biologist but not a believer in strict genetics+natural selection determinism. In superior animals behavior does not rest on genetics alone, but is cultural as well. Moreover, reality is more complex than models.

    Social animals that raise the alarm upon seeing a predator, for instance, can be considered from various perspectives. To start with, it is wrong to assume that an animal raising the alarm puts itself at risk. Whether you take monkeys or other animals, those raising the alarm do so while on duty, standing guard on higher ground or higher up a tree. They are sentinels just or mostly keeping watch while other forage around on the ground.

    Also, an altruistic animal that indeed sacrifices itself can be favoured by natural selection, not just as a result of iterations of the behavior executed by random members of the group, but also genetically, to the extent the behavior stems from the genotype. This is so because next of kin are more likely to carry the same rather than different gene alleles. Brothers, or parents and first-generation offspring, for instance, share half their genes.

    Also to be considered is the role played by epigenetics and environmental factors. Life conditions such as food abundance or scarcity can leave epigenetic stamps that will determine same or differential gene expression in the offspring which can last generations. Environmental factors may change what is the best survival or gene transmission strategy. Therefore, epigenetics may be as relevant or even more than genetics in determining survival or gene transmission.

    One last comment. Cooperation vs selfishness in social animals can be considered from two perspectives. An animal’s way of cooperating with its group might consist in being selfish relative to another group. Let’s consider humans and two different, competing survival strategies. On one hand we have hunter-gatherers and on the other hand we have farmers. Hunter-gatherers may tend to be more cooperative than farmers, but farming can produce more food surplus, not just per area unit but also by the simple act of increasing the farmed area, so farming allows greater population growth while, in the meantime, it eats away at hunter-gatherer habitats. This is an example of how internal cooperation goes hand in hand with external competition. Eventually, in any region where the soil and climate allow farming, farmers will make hunter-gatherers disappear. Then a new situation will develop, when the farmland cannot be increased, in which farming communities compete. Again internal cooperation combined with external competition will determine the outcome. Whatever genes are involved, they must allow for both cooperation and competition, according only to group belonging. This, I would guess, favours selfish or fighter genotypes that can be modulated by culture. Culture will most of the time keep the lid on violence, since people will spend most of their time just with their group, but as soon as conflict situations arise, brought on by contact with other groups or insider competition, carrying fighter genes that overcome the cultural lid put on them will become an advantage. This is also how social hierarchies are favored in farming communities, hierarchies that can get established by cooperation as much as by competition and violence potential.

  • Altruistic genes can be passed recessively or trigger randomly in a brood. As long as the random percentage is high enough, those families have a survival benefit. Both the “greedy” individuals and the “altruistic” ones share the spectrum of possible behaviors.

  • Same old problem: totally artificial environments, totally artificial experiments, artificial constraints. It looks exactly like the old game theory. Maybe the real problem is that the designers don’t really understand real evolution.
    If we look at real natural systems, cooperation is almost always beneficial. Behaviour that puts individuals into groups almost always confers a massive advantage, and this requires cooperation. This is seen from bacteria to insects to fish to horses to chimpanzees to humans. We humans conquered the world when we effectively had no language and were very little smarter than other animals; we were group hunters. Group (or pack) hunting predators are probably the most successful subset of animals in the world, and we are at the apex of that subset. Human behaviour depends on groups, and in the group the survival of the whole is far more important than the survival of the individual. If you die, your genes still get passed on through your siblings and other relatives. If the group dies, you probably also die anyway.

    In reality, in the prisoner’s dilemma, the prisoner who is generous and then gets betrayed will seek revenge – the group behaviour of the whole prison population is to destroy, ostracize, or kill betrayers. Group survival says that betrayers are a threat and must be eliminated. It’s pretty basic logic.
    However, liars and lying make the whole thing much more complicated. Basically, a liar betrays, then lies to get the victim to take the blame. Lying looks like a very successful strategy if it works – except for one thing: the group shares genetics. Lying spreads until there are more liars than victims, and then the ecosystem destroys itself because the only strategy available is to betray. In reality even this is not realistic, because a real prison is not an ecosystem. In reality, genetics are not the dominant component anyway. Each prisoner is controlled by culture, background, experience, personality, and current database (the other prisoners). In this system liars eventually get caught out and identified, and are generally treated as if they were betrayers. Note that a primary critical survival skill is, or becomes, the ability to spot a liar.
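
The runaway dynamic this comment describes, lying spreading until there is no one left to exploit, can be caricatured with a simple replicator-style frequency update. The payoff values b and c below are illustrative assumptions, not anything from the article or the Press–Dyson paper.

```python
# Caricature of "lying spreads until the ecosystem destroys itself".
# Assumed payoffs: a liar gains b per encounter with a victim; victims
# gain c from encounters with each other; liar-liar encounters pay nothing.
def step(p_liar, b=2.0, c=1.0):
    """One generation of frequency-proportional (replicator-style) update."""
    p_victim = 1.0 - p_liar
    fit_liar = b * p_victim          # liars only profit off victims
    fit_victim = c * p_victim        # victims profit off each other
    mean = p_liar * fit_liar + p_victim * fit_victim
    if mean == 0.0:                  # everyone is a liar: nothing left to exploit
        return p_liar, 0.0
    return p_liar * fit_liar / mean, mean

p = 0.1                              # start with 10% liars
for _ in range(30):
    p, mean_payoff = step(p)
# Liar frequency climbs toward 1 while the mean payoff collapses toward 0.
print(round(p, 4), round(mean_payoff, 6))
```

With these numbers the liar fraction converges to 1 and the population’s average payoff to 0, matching the comment’s point that the strategy destroys the very population it exploits.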

  • “The only way to win is not to play the game”.
    In other words, playing the game IS cooperating, and “winning” from an evolutionary standpoint, is to step outside the box ‘sometimes’ to expand the gene pool or avoid catastrophe that wipes out the pool.
    Any species is always in cooperation/competition with the niche that it develops simultaneously with. Sometimes the niche changes and wipes out the mean, leaving the fringe that is already adapted to something outside the mean hole of the niche.
    Civilization itself is “not playing the game” because it is an artificial niche that excludes the very risks that develop humans as animals. Cooperation inside civilization leads to an unnatural life form, and competition inside civilization is a false life form developed inside a bubble of energy and resources that are exploited from the uncivilized (or the poor).
    The human species can be considered a prisoner of the ‘prisoner’ in the natural prisoner’s dilemma because the collective group is evolving according to resources and risks acting upon the whole bubble. Civilization is an evolutionary trap: it’s beneficial for large numbers of humans, but the humans adapted to it are weak specimens unable to survive without the bubble of civilization’s protections and energies.
    The ‘new’ prisoner theory of the article seems to mix and match according to various rules, and while Robert Lucien Howe above points out the artificiality of the models, we also should consider that artificiality has its place. The key is to understand when the rules apply organically or naturally and when they apply artificially for a semi-controlled environment, and what the end game really is…
    “What are people FOR?”
    Survival isn’t the question in civilization: purposeful survival is. In nature, it’s easy: anything that is useful to its own future trends toward survival. Things that are destructive trend toward extinction. Civilization that acts competitively will trend toward consumption of its own future resources. People who ‘go along’ with the game will win only the game, not inclusion in a natural future.

  • This is very fascinating stuff, both from a computer science perspective and from a spiritual/social perspective.

    I would suggest that cooperation would still reign supreme based on the intent of the individual. If the intent is mere selfish survival of the individual, then I can see where generosity and extortion might be somewhat balanced. However, if the intent is survival of the entire society, and especially of offspring, even at the expense of the individual, then I’ll bet generosity would always prevail.

    “There is no greater love than to lay down your life for a friend.” – John 15:13

  • Well, I will tell you how cooperation works in humans: if you deceive society, society is going to make you pay for it. Cooperation is not generous; either you behave or you’ll pay the consequences, one way or the other.

  • The degree of relatedness in bacteria leads to simple forms of altruism that are almost mechanical. What survives in a population is a replicating mechanism. A bacterium that, every so many divisions, throws off a sacrificial replicant with its spigots turned on to fix nitrogen or produce some other enzyme that allows the other reproducing copies to multiply more efficiently and more rapidly wins over bacteria that do not do this and are less efficient. The boundaries of the parameter values for simple models that lead to such reproductive efficiency are easy to calculate. At least for a bacterium whose only job is to reproduce as quickly as possible and use up available resources, mechanisms for such altruism are obvious. Dyson is analyzing the wrong model. Multicellular organisms themselves evolved because of the efficiencies of specialization of function controlled by a central replication mechanism. Altruism at the cellular level is very close to the same thing.
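
The parameter boundary this comment alludes to can be sketched with a toy calculation. The helper frequency 1/k and division-rate boost f below are illustrative assumptions, not biological measurements or the comment author’s model.

```python
# Toy version of the sacrificial-replicant idea: every k-th division
# produces a non-reproducing helper cell whose enzyme output multiplies
# the division rate of the remaining cells by f. Both k and f are
# illustrative parameters, not measured values.
def effective_rate(k, f, base=1.0):
    """Per-capita growth of the lineage: a fraction (k-1)/k of cells
    reproduce, each at the boosted rate base * f."""
    return base * f * (k - 1) / k

# Helpers pay their way whenever f * (k-1)/k > 1, i.e. f > k / (k-1).
k = 10
threshold = k / (k - 1)                  # boost needed to break even (~1.111)
print(effective_rate(k, f=1.2))          # ~1.08 > 1.0: altruists outgrow non-altruists
print(effective_rate(k, f=1.05))         # ~0.945 < 1.0: the helper cost isn't repaid
```

The point of the sketch is only that the break-even condition is a one-line inequality, which is what makes such parameter boundaries "easy to calculate."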

  • The cooperation in those “prisoner’s dilemma” games comes about because in the real world, the games are not just played once, but numerous times. True, it may pay to “defect” in a one game situation, but once a person “defects”, his adversary, and the adversary’s friends and relatives will not give the individual the benefit of the doubt the next time the situation comes up.
    We see this in real life all the time. Businessmen are more honest, open, and friendly with customers than with transients they will deal with only once in their lives.
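
The repeated-game point made above can be illustrated with a short simulation. The payoff matrix (3/1/5/0) is the standard textbook parameterization of the prisoner’s dilemma, and tit-for-tat stands in here for the "won’t give the benefit of the doubt next time" behavior the comment describes; neither is taken from the Press–Dyson paper.

```python
# Iterated prisoner's dilemma with the standard textbook payoffs
# (row player, column player): mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, cooperating with a defector 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Play a repeated game; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)
        move_b = strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return opp_history[-1] if opp_history else "C"

# One-shot logic favors defection, but over 100 rounds exploiting a
# retaliator pays far less than sustained mutual cooperation.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The defector grabs 5 points in round one and then grinds out 1 point per round against a partner who never again gives the benefit of the doubt, which is exactly the mechanism the comment points to.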

  • Bats don’t know what the heck they are doing – why on earth use game theory?
    DNA is what makes the universe alive with life of all kind.
    It is almost indestructible, but what makes it work are mistakes in replication: every being/organism is unique, and the traits that make up that unique organism decide whether it is more fit than similar versions.

    There is no game theory involved here – just as there is none to be used on a bunch of rocks.

  • The assumption that life forms are independent actors is faulty: the genes are making the decisions here, and each actor is a collection of genes. Selfish behavior leads to propagation of your own genetic material, but cooperative behavior can propagate genes in other organisms. As long as an organism can recognize genetic similarity, it may be more profitable for the genes (not the organism itself) to sacrifice a little to profit a lot for a similar organism.

  • As one Wonko the Sane put it on Slashdot: ‘Why isn’t this headline, “Game Theory Called Into Question for Failing to Predict Observed Examples of Cooperation?”’
