UCB Psychology Humans Not Being Instinctively Selfish Questions


Sample Answer for UCB Psychology Humans Not Being Instinctively Selfish Questions Included After Question


Description

Respond to ALL prompt questions:

Zaki & Mitchell (2013).

What evidence do Zaki and Mitchell review that suggests that humans are not instinctively selfish but rather intuitively prosocial? Do you agree or disagree with this argument?

Warneken & Tomasello (2006).

What did Warneken and colleagues find when examining altruistic helping behavior in young children?

Rand & Nowak (2013).

What evidence do Rand and colleagues provide to support the five different mechanisms that may underlie human cooperation?
A Sample Answer For the Assignment: UCB Psychology Humans Not Being Instinctively Selfish Questions

Human cooperation

Feature Review by David G. Rand (Department of Psychology, Department of Economics, Program in Cognitive Science, School of Management, Yale University, New Haven, CT, USA) and Martin A. Nowak (Program for Evolutionary Dynamics, Department of Mathematics, Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA, USA). Trends in Cognitive Sciences, August 2013, Vol. 17, No. 8. Corresponding author: Nowak, M.A. (martin_nowak@harvard.edu). http://dx.doi.org/10.1016/j.tics.2013.06.003

Why should you help a competitor? Why should you contribute to the public good if free riders reap the benefits of your generosity? Cooperation in a competitive world is a conundrum. Natural selection opposes the evolution of cooperation unless specific mechanisms are at work. Five such mechanisms have been proposed: direct reciprocity, indirect reciprocity, spatial selection, multilevel selection, and kin selection. Here we discuss empirical evidence from laboratory experiments and field studies of human interactions for each mechanism. We also consider cooperation in one-shot, anonymous interactions for which no mechanisms are apparent. We argue that this behavior reflects the overgeneralization of cooperative strategies learned in the context of direct and indirect reciprocity: we show that automatic, intuitive responses favor cooperative strategies that reciprocate.

The challenge of cooperation

In a cooperative (or social) dilemma, there is tension between what is good for the individual and what is good for the population. The population does best if individuals cooperate, but for each individual there is a temptation to defect. A simple definition of cooperation is that one individual pays a cost for another to receive a benefit. Cost and benefit are measured in terms of reproductive success, where reproduction can be cultural or genetic. Box 1 provides a more detailed definition based on game theory.

Among cooperative dilemmas, the one most challenging for cooperation is the prisoner's dilemma (PD; see Glossary), in which two players choose between cooperating and defecting; cooperation maximizes social welfare, but defection maximizes one's own payoff regardless of the other's choice. In a well-mixed population in which each individual is equally likely to interact and compete with every other individual, natural selection favors defection in the PD: why should you reduce your own fitness to increase that of a competitor in the struggle for survival? Defectors always out-earn cooperators, and in a population that contains both cooperators and defectors, the latter have higher fitness. Selection therefore reduces the abundance of cooperators until the population consists entirely of defectors. For cooperation to arise, a mechanism for the evolution of cooperation is needed. Such a mechanism is an interaction structure that can cause cooperation to be favored over defection [1]. These interaction structures specify how the individuals of a population interact to receive payoffs, and how they compete for reproduction. Previous work has identified five such mechanisms for the evolution of cooperation (Figure 1): direct reciprocity, indirect reciprocity, spatial selection, multilevel selection, and kin selection. It is important to distinguish between interaction patterns that are mechanisms for the evolution of cooperation and behaviors that require an evolutionary explanation (such as strong reciprocity, upstream reciprocity, and parochial altruism; Box 2). In this article, we build a bridge between theoretical work that has proposed these mechanisms and experimental work exploring how and when people actually cooperate.
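The claim that selection eliminates cooperators in a well-mixed PD can be illustrated with a minimal replicator-dynamics sketch. This is my own toy illustration, not code from the paper; the payoff values R = 3, S = 0, T = 5, P = 1 are hypothetical choices satisfying T > R > P > S.

```python
# Illustrative sketch: replicator dynamics for a well-mixed prisoner's
# dilemma. Hypothetical payoffs satisfy T > R > P > S, so defectors always
# have higher fitness and selection drives cooperators out.

def replicator_step(x, R=3.0, S=0.0, T=5.0, P=1.0, dt=0.01):
    """One Euler step for the fraction x of cooperators."""
    fc = x * R + (1 - x) * S      # average payoff to a cooperator
    fd = x * T + (1 - x) * P      # average payoff to a defector
    fbar = x * fc + (1 - x) * fd  # population mean payoff
    return x + dt * x * (fc - fbar)

x = 0.9  # start with 90% cooperators
for _ in range(20000):
    x = replicator_step(x)
print(round(x, 4))  # → 0.0 (cooperators effectively extinct)
```

Because fc < fd for every mixture x, the fraction of cooperators decreases monotonically, matching the argument above.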
First we present evidence from experiments that implement each mechanism in the laboratory. Next we discuss why cooperation arises in some experimental settings in which no mechanisms are apparent. Finally, we consider the cognitive underpinnings of human cooperation. We show that intuitive, automatic processes implement cooperative strategies that reciprocate, and that these intuitions are affected by prior experience. We argue that these results support a key role for direct and indirect reciprocity in human cooperation, and emphasize the importance of culture and learning.

Glossary

Evolutionary dynamics: mathematical formalization of the process of evolution whereby a population changes over time. Natural selection operates such that genotypes (or strategies) with higher fitness tend to become more common, whereas lower-fitness genotypes tend to die out. Mutation (re)introduces variation into the population. This process can also represent cultural evolution and social learning, in which people imitate those with higher payoffs and sometimes experiment with novel strategies.

Evolutionary game theory: combination of game theory and evolutionary dynamics. There is a population of agents, each of whom has a strategy. These agents interact with each other and earn payoffs. Payoff is translated into fitness, and the frequency of strategies in the population changes over time accordingly: higher-payoff strategies tend to become more common, whereas lower-payoff strategies tend to die out.

Game theory: mathematical formalization of social interaction and strategic behavior. A given interaction is represented by (i) a set of players, (ii) the choices available to each player, and (iii) the payoff earned by each player depending on both her choice and the choices of the other players. The prisoner's dilemma is one such game that describes the problem of cooperation.

Mechanism for the evolution of cooperation: interaction structure that can cause natural selection to favor cooperation over defection. The mechanism specifies how the individuals of a population interact to receive payoffs, and how they compete for reproduction.

Prisoner's dilemma: game involving two players, each of whom chooses between cooperation or defection. If both players cooperate, they earn more than if both defect. However, the highest payoff is earned by a defector whose partner cooperates, whereas the lowest payoff is earned by a cooperator whose partner defects. It is individually optimal to defect (regardless of the partner's choice) but socially optimal to cooperate. Box 1 provides further details.

Public goods game: prisoner's dilemma with more than two players. In the public goods game, each player chooses how much money to keep for herself and how much to contribute to an account that benefits all group members.

Box 1. Defining cooperation

Consider a game between two strategies, C and D, and the following payoff matrix (indicating the row player's payoff):

        C   D
    C   R   S
    D   T   P

When does it make sense to call strategy C cooperation and strategy D defection? The following definition [163,164] is useful. The game is a cooperative dilemma if (i) two cooperators obtain a higher payoff than two defectors, R > P, yet (ii) there is an incentive to defect. This incentive can arise in three different ways: (a) if T > R, then it is better to defect when playing against a cooperator; (b) if P > S, then it is better to defect when playing against a defector; and (c) if T > S, then it is better to be the defector in an encounter between a cooperator and a defector. If at least one of these three conditions holds, then we have a cooperative dilemma. If none holds, then there is no dilemma and C is simply better than D. If all three conditions hold, we have a prisoner's dilemma, T > R > P > S [6,48]. The prisoner's dilemma is the most stringent cooperative dilemma. Here defectors dominate over cooperators. In a well-mixed population, natural selection always favors defectors over cooperators. For cooperation to arise in the prisoner's dilemma, we need a mechanism for the evolution of cooperation.

Cooperative dilemmas that are not the prisoner's dilemma could be called relaxed cooperative dilemmas. In these games it is possible to evolve some level of cooperation even if no mechanism is at work. One such example is the snowdrift game, given by T > R > S > P. Here we find a stable equilibrium between cooperators and defectors, even in a well-mixed population. If 2R > T + S, then the total payoff for the population is maximized if everyone cooperates; otherwise a mixed population achieves the highest total payoff. This is possible even for the prisoner's dilemma.

The above definition can be generalized to more than two people (n-person games). We denote by Pi and Qi the payoffs for cooperators and defectors, respectively, in groups that contain i cooperators and n − i defectors. For the game to be a cooperative dilemma, we require that (i) an all-cooperator group obtains a higher payoff than an all-defector group, Pn > Q0, yet (ii) there is some incentive to defect. The incentive to defect can take the following form: (a) Pi < Qi−1 for i = 1, . . ., n and (b) Pi < Qi for i = 1, . . ., n − 1. Condition (a) means that an individual can increase his payoff by switching from cooperation to defection. Condition (b) means that in any mixed group, defectors have a higher payoff than cooperators. If only some of these incentives hold, then we have a relaxed cooperative dilemma. In this case some evolution of cooperation is possible even without a specific mechanism. However, a mechanism would typically enhance the evolution of cooperation by increasing the equilibrium abundance of cooperators, increasing the fixation probability of cooperators, or reducing the invasion barrier that needs to be overcome. The volunteer's dilemma is an example of a relaxed situation [165]. If all incentives hold, we have the n-person equivalent of a prisoner's dilemma, called the public goods game (PGG) [63], and a mechanism for evolution of cooperation is needed.

Figure 1. The five mechanisms for the evolution of cooperation. Direct reciprocity operates when two individuals interact repeatedly: it pays to cooperate today to earn your partner's cooperation in the future. Indirect reciprocity involves reputation, whereby my actions towards you also depend on your previous behavior towards others. Spatial selection entails local interaction and competition, leading to clusters of cooperators. Multilevel selection occurs when competition exists between groups and between individuals. Kin selection arises when there is conditional behavior according to kin recognition.

Five mechanisms

Direct reciprocity

Direct reciprocity arises if there are repeated encounters between the same two individuals [2–5]. Because they interact repeatedly, these individuals can use conditional strategies whereby behavior depends on previous outcomes. Direct reciprocity allows the evolution of cooperation if the probability of another interaction is sufficiently high [6]. Under this 'shadow of the future', I may pay the cost of cooperation today to earn your reciprocal cooperation tomorrow. The repeated game can occur with players making simultaneous decisions in each round or taking turns [7]. Successful strategies for the simultaneous repeated PD include tit-for-tat (TFT), a strategy that copies the opponent's previous move, and win–stay lose–shift, a strategy that switches its action after experiencing exploitation or mutual defection [8].
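The two strategies just described can be sketched in a few lines. This is a minimal illustration of my own, not code from the paper; the payoff values (R = 3, S = 0, T = 5, P = 1) are conventional hypothetical choices satisfying T > R > P > S.

```python
# Sketch of repeated-PD strategies: tit-for-tat copies the partner's last
# move; win-stay lose-shift repeats its own move after a good outcome (R or
# T) and switches after exploitation (S) or mutual defection (P).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def win_stay_lose_shift(my_hist, their_hist):
    if not my_hist:
        return "C"
    last_payoff = PAYOFF[(my_hist[-1], their_hist[-1])][0]
    if last_payoff in (3, 5):                      # win: stay
        return my_hist[-1]
    return "D" if my_hist[-1] == "C" else "C"      # lose: shift

def play(strat1, strat2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(a, b)]
        h1.append(a); h2.append(b)
        score1 += p1; score2 += p2
    return score1, score2

always_defect = lambda my_hist, their_hist: "D"
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # TFT is exploited only once: (9, 14)
```

Against a defector, TFT loses only the first round and then withholds cooperation, which is exactly the 'shadow of the future' logic described above.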
TFT is an excellent catalyst for the emergence of cooperation, but when errors are possible it is quickly replaced by strategies that sometimes cooperate even when the opponent defects (e.g., Generous TFT) [9].

Indirect reciprocity

Indirect reciprocity operates if there are repeated encounters within a population and third parties observe some of these encounters or find out about them. Information about such encounters can spread through communication, affecting the reputations of the participants. Individuals can thus adopt conditional strategies that base their decision on the reputation of the recipient [10,11]. My behavior towards you depends on what you have done to me and to others.

Box 2. Behavioral patterns versus mechanisms for the evolution of cooperation

It is important to distinguish mechanisms for the evolution of cooperation from behavioral patterns that are not themselves mechanisms. Three examples are upstream reciprocity, strong reciprocity, and parochial altruism. Upstream (or generalized) reciprocity refers to the phenomenon of paying it forward, by which an individual who has just received help is more likely to help others in turn. Strong reciprocity refers to individuals who reward cooperation and punish selfishness, even in anonymous interactions with no promise of future benefits. Parochial altruism (or ingroup bias) describes the behavior whereby people are more likely to help members of their own group than members of other groups. None of these concepts explains the evolution of cooperation: adding one or more of these elements to a prisoner's dilemma will not cause selection to favor cooperation. Instead, these concepts are descriptions of behavior that require an evolutionary explanation. Group selection, spatial structure, or some chance of direct or indirect reciprocity can lead to the evolution of upstream reciprocity [166,167], strong reciprocity [13,39,168], and parochial altruism [122,139,169–171].
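Reputation-conditioned helping can be sketched as a toy donor game. Everything here is my own illustration rather than a model from the paper: the particular social norm (you lose good standing only by refusing a good recipient), the cost and benefit values, and the population mix are all hypothetical.

```python
# Toy sketch of indirect reciprocity: discriminators help recipients in good
# standing; unconditional defectors never help. The norm and all parameter
# values are illustrative assumptions, not taken from the paper.

import random

def donor_game(strategies, rounds=2000, cost=1.0, benefit=3.0, seed=7):
    random.seed(seed)
    n = len(strategies)
    good = [True] * n        # everyone starts with a good reputation
    payoff = [0.0] * n
    for _ in range(rounds):
        donor, recipient = random.sample(range(n), 2)
        helps = strategies[donor] == "DISC" and good[recipient]
        if helps:
            payoff[donor] -= cost
            payoff[recipient] += benefit
        # assumed norm: standing is lost only by refusing a good recipient
        good[donor] = helps or not good[recipient]
    return payoff

strategies = ["DISC"] * 15 + ["ALLD"] * 5
p = donor_game(strategies)
disc_avg = sum(p[:15]) / 15
alld_avg = sum(p[15:]) / 5
print(disc_avg > alld_avg)  # → True: reputation channels help toward helpers
```

Defectors quickly acquire a bad reputation and stop receiving help, so discriminators end up with the higher average payoff, which is the basic logic of indirect reciprocity.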
Cooperation is costly but leads to the reputation of being a helpful individual, and therefore may increase your chances of receiving help from others. A strategy for indirect reciprocity consists of a social norm and an action rule [12–14]. The social norm specifies how reputations are updated according to interactions between individuals. The action rule specifies whether or not to cooperate given the available information about the other individual. Indirect reciprocity enables the evolution of cooperation if the probability of knowing someone's reputation is sufficiently high.

Spatial selection

Spatial selection can favor cooperation without the need for strategic complexity [15,16]. When populations are structured rather than randomly mixed, behaviors need not be conditional on previous outcomes. Because individuals interact with those near them, cooperators can form clusters that prevail, even if surrounded by defectors. The fundamental idea is that clustering creates assortment whereby cooperators are more likely to interact with other cooperators. Therefore, cooperators can earn higher payoffs than defectors. More generally, population structure affects the outcome of the evolutionary process, and some population structures can lead to the evolution of cooperation [17,18]. Population structure specifies who interacts with whom to earn payoffs and who competes with whom for reproduction. The latter can be genetic or cultural. Population structure can represent geographic distribution [19,20] or social networks [21], and can be static [22–24] or dynamic [21,25–29]. Population structure can also be implemented through tag-based cooperation, in which interaction and cooperation are determined by arbitrary tags or markers [30–32]. In this case, clustering is not literally spatial but instead occurs in the space of phenotypes [30].
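Local interaction and imitation can be sketched on a small grid. This is a toy in the style of classic spatial-games models, not the paper's own model; the payoff convention (C earns 1 per cooperating neighbor, D earns a temptation value b per cooperating neighbor) and b = 1.6 are assumptions for illustration.

```python
# Toy spatial PD sketch: each cell plays all (Moore) neighbors, then copies
# the strategy of the highest-scoring cell in its neighborhood (incl. itself).
# Payoffs and b = 1.6 are hypothetical, chosen only to illustrate clustering.

def step(grid, b=1.6):
    n = len(grid)
    def neighbors(r, c):
        return [(i, j) for i in range(max(0, r - 1), min(n, r + 2))
                        for j in range(max(0, c - 1), min(n, c + 2))
                        if (i, j) != (r, c)]
    payoff = {}
    for r in range(n):
        for c in range(n):
            coop_nbrs = sum(grid[i][j] == "C" for i, j in neighbors(r, c))
            payoff[(r, c)] = coop_nbrs * (1 if grid[r][c] == "C" else b)
    new = []
    for r in range(n):
        row = ""
        for c in range(n):
            best = max(neighbors(r, c) + [(r, c)], key=lambda p: payoff[p])
            row += grid[best[0]][best[1]]
        new.append(row)
    return new

grid = ["CCCCC", "CCCCC", "CCDCC", "CCCCC", "CCCCC"]
print(step(grid))  # the lone defector converts its neighbors: a 3x3 D block
```

Whether defection keeps spreading or cooperator clusters hold their ground depends on b; the point of the sketch is only that outcomes are decided locally, by neighborhoods, rather than by the population average.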
Multilevel selection

Multilevel selection operates if, in addition to competition between individuals in a group, there is also competition between groups [33–39]. It is possible that defectors win within groups, but that groups of cooperators outcompete groups of defectors. Overall, such a process can result in the selection of cooperators. Darwin wrote in 1871: 'There can be no doubt that a tribe including many members who . . . were always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over other tribes; and this would be natural selection.' [40].

Kin selection

Kin selection can be seen as a mechanism for the evolution of cooperation if properly formulated. In our opinion, kin selection operates if there is conditional behavior based on kin recognition: an individual recognizes kin and behaves accordingly. As J.B.S. Haldane reportedly said, 'I will jump into the river to save two brothers or eight cousins' [41]. Much of the current literature on kin selection, however, does not adhere to this simple definition based on kin recognition. Instead, kin selection is linked to the concept of inclusive fitness [42]. Inclusive fitness is a particular mathematical method to account for fitness effects. It assumes that personal fitness can be written as a sum of additive components caused by individual actions. Inclusive fitness works in special cases, but makes strong assumptions that prevent it from being a general concept [43]. A straightforward mathematical formulation describing the evolutionary dynamics of strategies or alleles without the detour of inclusive fitness is a more universal and more meaningful approach. This position critical of inclusive fitness, which is based on a careful mathematical analysis of evolution [43], has been challenged by proponents of inclusive fitness [44], but without considering the underlying mathematical results [45].
In our opinion, a clear understanding of kin selection can only emerge once the intrinsic limitations of inclusive fitness are widely recognized. Meanwhile, it is useful to remember that no phenomenon in evolutionary biology requires an inclusive fitness-based analysis [43].

Interactions between mechanisms

Each of these mechanisms applies to human cooperation. Over the course of human evolution, it is likely that they were (and are) all in effect to varying degrees. Although each mechanism has traditionally been studied in isolation, it is important to consider the interplay between them. In particular, when discussing the evolution of any prosocial behavior in humans, we cannot exclude direct and indirect reciprocity. Early human societies were small, and repetition and reputation were always in play. Even in the modern world, most of our crucial interactions are repeated, such as those with our coworkers, friends, and family. Thus, spatial structure, group selection, and kin selection should be considered in the context of their interactions with direct and indirect reciprocity. Surprising dynamics can arise when mechanisms are combined. For example, direct reciprocity and spatial structure can interact either synergistically or antagonistically, depending on the levels of repetition and assortment [46]. Further exploration of the interactions between mechanisms is a promising direction for future research.

Experimental evidence in support of the five mechanisms

Theoretical work provides deep insights into the evolution of human cooperation. Evolutionary game theory allows us to explore what evolutionary trajectories are possible and what conditions may give rise to cooperation. To investigate how cooperation among humans in particular arises and is maintained, theory must be complemented with empirical data from experiments [47]. Theory suggests what to measure and how to interpret it.
Experiments illuminate human cooperation in two different ways: by examining what happens when particular interaction structures are imposed on human subjects, and by revealing the human psychology shaped by mechanisms that operate outside of the laboratory (Box 3). We now present both types of experimental evidence. First we describe experiments designed to test each of the mechanisms for the evolution of cooperation in the laboratory. We then discuss the insights gained from cooperation in one-shot anonymous experiments. For comparability with theory, we focus on experiments that study cooperation using game-theoretic frameworks. Most of these experiments are incentivized: the payout people receive depends on their earnings in the game. Subjects are told the true rules of the game and deception is prohibited: to explore the effect of different rules on cooperation, subjects must believe that the rules really apply. Finally, interactions are typically anonymous, often occurring via computer terminals or over the internet. This anonymity reduces concerns about reputational effects outside of the laboratory, creating a baseline from which to measure the effect of adding more complicated interaction structures.

Box 3. How behavioral experiments inform evolutionary models

Experiments shed light on human cooperation in different ways [47]. One type of experiment seeks to recreate the rules of interaction prescribed by a given model. By allowing human subjects to play the game accordingly, researchers test the effect of adding human psychology. Do human agents respond to the interaction rules similarly to the agents in the models? Or are important elements of proximate human psychology missing from the models, revealing new questions for evolutionary game theorists to answer? Other studies explore behavior in experiments in which no mechanisms that promote cooperation are present (e.g., one-shot anonymous games in well-mixed populations).
By examining play in these artificial settings, we hope to expose elements of human psychology and cognition that would ordinarily be unobservable. For example, in repeated games, it can be self-interested to cooperate. When we observe people who cooperate in repeated games, we cannot tell if they have a predisposition towards cooperation or are just rational selfish maximizers. One-shot anonymous games are required to reveal social preferences. The artificiality of these laboratory experiments is therefore not a flaw, but can make such experiments valuable. It is critical, however, to bear this artificiality in mind when interpreting the results: these experiments are useful because of what they reveal about the psychology produced by the outside world, rather than themselves being a good representation of that world.

Direct reciprocity

Over half a century of experiments [48] demonstrate the power of repetition in promoting cooperation. Across many experiments using repeated PDs, people usually learn to cooperate more when the probability of future interaction is higher [49–55] (in these games, there is typically a constant probability that a given pair of subjects will play another round of PD together). Repetition continues to support cooperation even if errors are added (the computer sometimes switches a player's move to the opposite of what she intended) [55], which is consistent with theoretical results [9,56]. More quantitatively, theoretical work using stochastic evolutionary game theory (modeling that incorporates randomness and chance) finds that cooperation will be favored by selection if TFT earns a higher payoff than the strategy Always Defect (ALLD) in a population in which the two strategies are equally common (when TFT is risk-dominant over ALLD) [57]. More generally, as the payoff for TFT relative to ALLD in such a mixed population increases, so too does the predicted frequency of cooperation.
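The risk-dominance comparison behind this prediction can be checked numerically. The sketch below is my own illustration: payoffs are hypothetical values satisfying T > R > P > S, and the expected repeated-game payoffs use the standard expected length 1/(1 − w) for a game that continues each round with probability w.

```python
# Sketch of risk dominance for TFT vs ALLD in a 50/50 mixed population.
# Hypothetical payoffs: T > R > P > S.

R, S, T, P = 3.0, 0.0, 5.0, 1.0

def expected_payoff(strategy, w):
    """Average repeated-game payoff against a 50/50 mix of TFT and ALLD,
    where each pair plays another round with probability w."""
    if strategy == "TFT":
        vs_tft = R / (1 - w)              # mutual cooperation forever
        vs_alld = S + w * P / (1 - w)     # exploited once, then mutual defection
    else:                                 # ALLD
        vs_tft = T + w * P / (1 - w)      # exploits once, then mutual defection
        vs_alld = P / (1 - w)             # mutual defection forever
    return (vs_tft + vs_alld) / 2

threshold = (T + P - S - R) / (T - S)     # = 0.6 for these payoff values
for w in (0.5, 0.7):
    tft, alld = expected_payoff("TFT", w), expected_payoff("ALLD", w)
    print(w > threshold, tft > alld)      # the two conditions agree
```

For these payoffs, w below 0.6 makes ALLD the better strategy in the mixed population, and w above 0.6 makes TFT better, matching the threshold (T + P − S − R)/(T − S) used on the x-axis of Figure 2.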
Here we show that this prediction does an excellent job of organizing the experimental data: across 14 conditions from four papers, the fraction of cooperators is predicted with R² = 0.81 by the extent to which the probability of future interaction exceeds the risk-dominance threshold (Figure 2). This is one of numerous situations in which stochastic evolutionary game theory [57] successfully describes observed human behavior [58–61].

Figure 2. Repetition promotes cooperation in the laboratory. The frequency of cooperative strategies in various repeated prisoner's dilemma (PD) experiments is plotted as a function of the extent to which future consequences exist for actions in the current period. Specifically, the x-axis ('shadow of the future') shows the amount by which the continuation probability w (probability that two subjects play another PD round together) exceeds the critical payoff threshold (T + P − S − R)/(T − S) necessary for tit-for-tat (TFT) to risk-dominate always defect (ALLD). In a population that is 1/2 TFT and 1/2 ALLD, w < (T + P − S − R)/(T − S) means that ALLD earns more than TFT; w = (T + P − S − R)/(T − S) means that TFT and ALLD do equally well; and the more w exceeds (T + P − S − R)/(T − S), the more TFT earns compared to ALLD. The y-axis indicates the probability of cooperation in the first round of each repeated PD game (cooperation in the first period is a pure reflection of one's own strategy, whereas play in later periods is influenced by the partner's strategy as well). Data are from [52–54] and [Rand, D.G., et al. (2013) It's the thought that counts: the role of intentions in reciprocal altruism, http://ssrn.com/abstract=2259407]. For maximal comparability, we do not include the treatments from [54] with costly punishment, or the treatments from Rand et al. (http://ssrn.com/abstract=2259407) with exogenously imposed errors. Owing to variations in experimental design, subjects in different experiments had differing lengths of time to learn. Nonetheless, a clear increasing relationship is evident, both within each study and over all studies. The trend line shown is given by y = 0.93x + 0.40, with R² = 0.81.

Repetition promotes cooperation in dyadic interactions. The situation is more complicated, however, if groups of players interact repeatedly [62]. Such group cooperation is studied in the context of the public goods game (PGG) [63], an n-player PD. The PGG is typically implemented by giving each of n players an endowment and having them choose how much to keep for themselves and how much to contribute to the group. All contributions are multiplied by some constant r (1 < r < n) and split equally by all group members. The key difference from the two-player PD is that in the PGG, targeted interactions are not possible: if one player contributes a large amount while another contributes little, a third group member cannot selectively reward the former and punish the latter. The third player can choose either a high contribution, rewarding both players, or a low contribution, punishing both. Thus, although direct reciprocity can in theory stabilize cooperation in multiplayer games, this stability is fragile and can be undermined by errors or a small fraction of defectors [64]. As a result, cooperation almost always fails in repeated PGGs in the laboratory [65–67]. Does this mean that mechanisms other than direct reciprocity are needed to explain group cooperation? The answer is no.
We must only realize that group interactions do not occur in a vacuum, but rather are superimposed on a network of dyadic personal relationships. These personal, pairwise relationships allow for the targeted reciprocity that is missing in the PGG, giving us the power to enforce group-level cooperation. They can be represented by adding pairwise reward or punishment opportunities to the PGG (Box 4 discusses costly punishment in repeated two-player games). After each PGG round, subjects can pay to increase or decrease the payoff of other group members according to their contributions. Thus, the possibility of targeted interaction is reintroduced, and direct reciprocity can once again function to promote cooperation. Numerous laboratory experiments demonstrate that pairwise reward and punishment are both effective in promoting cooperation in the repeated PGG [65–70]. Naturally, given that both implementations of direct reciprocity promote cooperation, higher payoffs are achieved when using reward (which creates benefit) than punishment (which destroys it). Rewarding also avoids vendettas [54,71] and the possibility of antisocial punishment, whereby low contributors pay to punish high contributors. It has been demonstrated that antisocial punishment occurs in cross-cultural laboratory experiments [72–74] and can prevent the evolution of cooperation in theoretical models [75–78]. These cross-cultural experiments add a note of caution to previous studies on punishment and reward in the PGG: targeted interactions can only support cooperation if they are used properly. Antisocial punishment undermines cooperation, as does rewarding of low contributors [Ellingsen, T. et al. (2012) Civic capital in two cultures: the nature of cooperation in Romania and USA, http://ssrn.com/abstract=2179575]. With repetition and the addition of pairwise interactions, cooperation can be a robust equilibrium in the PGG, but populations can nonetheless become stuck in other, less efficient equilibria or fail to equilibrate at all. Taken together, the many experiments exploring the linking of dyadic and multiplayer repeated games demonstrate the power of direct reciprocity for promoting large-scale cooperation.

Box 4. Tit-for-tat versus costly punishment

The essence of direct reciprocity is that future consequences exist for present behavior: if you do not cooperate with me today, I will not cooperate with you tomorrow. This form of punishment, practiced by TFT in pairwise interactions via denial of future reward, is different from costly punishment; in the latter case, rather than just defecting against you tomorrow, I actually pay a cost to impose a cost on you [54,65–67,84,172–175]. The following question therefore arises: what is the role of costly punishment in the context of repeated pairwise interactions? A set of behavioral experiments revealed that costly punishment in the repeated PD was disadvantageous, with punishers earning lower payoffs than non-punishers. This was because punishment led to retaliation much more often than to reconciliation [54]. Complementing these observations are evolutionary simulations that revealed similar results: across a wide range of parameter values, selection disfavors the use of costly punishment in the repeated PD [61]. Similar results were found in an evolutionary model based on group selection [176]: even a minimal amount of repetition, in which a second punishment stage is added, causes selection to disfavor both punishment and cooperation because of retaliation.
Interestingly, this linking also involves indirect reciprocity: if I punish a low contributor, then I reciprocate a harm done to me (direct reciprocity) as well as a harm done to other group members (indirect reciprocity [79]). Further development of theoretical models analyzing linked games is an important direction for future research, as is exploring the interplay between direct and indirect reciprocity in such settings.

Indirect reciprocity

Indirect reciprocity is a powerful mechanism for promoting cooperation among subjects who are not necessarily engaged in pairwise repeated interactions. To study indirect reciprocity in the laboratory, subjects typically play with randomly matched partners and are informed about their choices in previous interactions with others [80,81]. Most subjects condition their behavior on this information: those who have been cooperative previously, particularly towards partners who have behaved well themselves, tend to receive more cooperation [80–
