“How many bad apples does it take to spoil the whole barrel?”: Social exclusion and toleration for bad apples
Publisher: Elsevier - Science Direct
Journal: Journal of Experimental Social Psychology, Volume 45, Issue 4, July 2009, Pages 603–613
In social dilemmas, where personal welfare is in conflict with collective welfare, there are inherent incentives to act non-cooperatively. Moreover, there is evidence that the example of a few uncooperative group members (“bad apples”) is more influential than the example of comparable numbers of cooperative members (a bad apple effect). Two studies are reported that examine the functional relationship between the number of likely bad apples and individual cooperation, and whether and when the threat of social exclusion for uncooperative behavior may effectively counter the temptation to follow the example of such “bad apples”. It is shown that (a) the threat of exclusion is sufficient to counter the temptation to follow a few bad apples’ example, (b) such threats cannot, however, overcome the cooperation-degrading effects of large numbers (e.g., a majority) of bad apples, and (c) the effectiveness of such threats may be greater in relatively smaller groups.
“Wel bet is roten appul out of hoord; Than that it rotie al the remenaunt [Better take the rotten apple from the hoard, Than to let it lie to spoil the good ones there].”
Cook’s Tale, The Canterbury Tales, G. Chaucer (1380)

“The rotten apple spoils his companion.”
Poor Richard’s Almanac, B. Franklin (1733)

Such proverbs reflect a rather general principle in social psychology—bad information about another person appears to have a stronger effect on our impressions, evaluations, and reactions to that person than equivalently extreme good information (e.g., Baumeister et al., 2001; Skowronski & Carlston, 1989). In this paper we are interested in the application of this principle within social dilemmas, settings in which personal and collective welfare are in conflict (Dawes, 1980; Komorita & Parks, 1999; Messick & Brewer, 1983) and where “bad” (i.e., uncooperative) behavior is always more immediately and personally rewarding than “good” (i.e., cooperative) behavior. We are particularly interested in (1) how the uncooperative (“bad”) behavior of just one or more group members may substantially reduce others’ willingness to act cooperatively—or, as the proverb goes, whether “one bad apple spoils the whole barrel”—and (2) what might be done to prevent this—i.e., how and when can group members be effectively deterred from following the bad example of bad apples? There is by now considerable and convincing evidence that group members’ behavior within a social dilemma is influenced by both expectations and observations of others’ behavior (e.g., Bornstein & Ben-Yossef, 1994; Braver & Barnett, 1974; Dawes et al., 1977; Komorita et al., 1992; Messick et al., 1983; Schroeder et al., 1983; Yamagishi & Sato, 1986).
With some notable exceptions (e.g., a strong competitor’s tendency to exploit uniformly cooperative others; Kelley & Stahelski, 1970), the more cooperative others in the group are (or are expected to be), the more cooperative we tend to be. However, the exact nature of the relationship between others’ and our own behavior has not been well established. Our current focus is on identifying moderating factors or boundary conditions for this relationship. One obvious moderation question is whether uncooperative group members have greater impact on our behavior than equally extreme cooperative ones. Following Colman (1982) and Ouwerkerk et al. (2005), we will refer to this as the bad apple effect. A number of scholars (Colman, 1982, 1995; Marwell & Schmitt, 1972; Sugden, 1984) have proposed such an effect, and there is some indirect but supportive empirical evidence. For example, it has been widely observed (e.g., Andreoni, 1995; Ledyard, 1995; Pruitt & Kimmel, 1977) that with repeated play in a social dilemma, the mean rate of cooperation tends to decline. The notion that the relatively less cooperative members of the group have more impact on the group’s behavioral norm than the relatively more cooperative members is quite consistent with a bad apple effect. Also, it has been reported (Messick et al., 1983; Rutte & Wilke, 1984) that providing a relatively wide distribution of false harvesting feedback in a resource-conservation dilemma leads to faster depletion of the shared resource than feedback with a narrow distribution and the same mean, just as one would expect if the extremely low cooperator had greater relative impact on others’ behavior.
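The incentive structure at issue can be made concrete with a linear public-goods game, a standard laboratory form of the social dilemma. This is a minimal sketch; the endowment and multiplier values are illustrative assumptions, not parameters taken from any of the studies cited here.

```python
# Minimal sketch of a linear public-goods game, a standard social-dilemma
# form. Each of n players either keeps an endowment or contributes to a pool
# that is multiplied by r (1 < r < n) and shared equally among all players.
# The endowment and multiplier below are illustrative assumptions only.

def payoff(my_contribution, others_contributions, endowment=10, r=1.5):
    """One player's payoff: what they kept, plus their share of the pool."""
    n = 1 + len(others_contributions)
    pool = my_contribution + sum(others_contributions)
    return (endowment - my_contribution) + r * pool / n

others = [10, 10, 10, 10]                 # four fully cooperative partners
all_cooperate = payoff(10, others)        # 15.0
free_ride = payoff(0, others)             # 22.0
all_defect = payoff(0, [0, 0, 0, 0])      # 10.0

# Defecting always pays more individually (22.0 > 15.0), yet universal
# cooperation beats universal defection (15.0 > 10.0): personal and
# collective welfare conflict, so "bad" behavior is immediately rewarding.
```

The same structure underlies the dichotomous-choice and resource-conservation dilemmas discussed in this section: whatever the others do, an individual earns more by defecting, but the group as a whole earns most under full cooperation.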
Finally, a recent, unpublished set of studies (Ouwerkerk, Van Lange, Gallucci, & Van Vugt, in preparation; also see Ouwerkerk et al., 2005) reports that participants were more inclined to follow the bad example of a single, relatively non-cooperative person (a bad apple) in a social dilemma than the good example of a single, relatively cooperative person. The proverb with which we began this paper, “one bad apple spoils the whole barrel”, makes an even stronger claim. Not only may – as Baumeister et al. (2001) suggest – bad be stronger than good (a bad apple effect), but even a single bad model may also be sufficient to make the rest of the group act badly (a one-bad-apple effect). This possibility is supported by Kurzban, McCabe, Smith, and Wilson’s (2001) findings. Using a real-time five-person game, in which participants received continuous and veridical feedback on others’ current contribution decisions, they found evidence that group members strived to contribute at or slightly above the level of the person making the lowest contribution in the group (a minimal reciprocity rule; cf. Sugden, 1984). More direct evidence comes from a study summarized in a chapter by Rutte and Wilke (1992). They asked members of five-person groups to play a dichotomous-choice NPD game, and to begin by stating a non-binding intention. Via false feedback, they then manipulated what the other four people in the group allegedly intended to do. They had five conditions: either 0, 1, 2, 3, or all 4 of the others purportedly intended to defect. After receiving this feedback, all participants made their final and binding choices. These choices are reproduced in the solid curve of Fig. 1. As the figure shows, the function relating the number of bad apples to the cooperation rate was a step function. A single bad apple lowered the cooperation rate from about 50% to about 20%, and there was no further change in cooperation as the number of bad apples increased.

Fig. 1.
Cooperation as a function of the number of “bad apples” in Rutte and Wilke (1992).

If such a one-bad-apple effect can be well established empirically (one of our present goals), it has rather disturbing implications for cooperation in human groups – namely, groups may be very vulnerable to the effects of a few or even a single uncooperative model. Social dilemmas, by definition, present clear and often strong incentives to act uncooperatively. There appears to be a non-trivial fraction of the population that will nearly always defect (Fischbacher et al., 2001; Kurzban & Houser, 2001). It would be surprising, then, if most groups (especially large groups; Colman, 1995) did not include at least a few uncooperative “bad apples.” Given that it is likely that there will usually be some bad apples in any group, how can groups – or at least those groups where the presence of bad apples can be detected – successfully solve social dilemmas? More generally, how did humans solve the evolutionary problem of sociality—i.e., evolve into a species that can routinely solve social dilemmas? One generic solution to the one-bad-apple problem has been identified by evolutionary game theorists (Boyd & Richerson, 1992; Gintis et al., 2003; Henrich et al., 2001; Hirshleifer & Rasmussen, 1989; Kameda et al., 2003; Price et al., 2002; Sugden, 1984). It requires that two conditions be met: that humans (1) act reciprocally (i.e., cooperate and defect in response to others’ actions), and (2) punish non-cooperators. Most of these models, however, are vague about the form such punishment might take. For example, in developing an evolutionary model to explain communal sharing among primitive humans, Kameda et al. (2003) refer to fighting as a way of punishing hunters who refuse to share what they kill or even those who refuse to inflict such punishments. In their laboratory work, Fehr and Gächter (2002) permitted punishment of defectors via fines.
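The “minimal reciprocity” rule that Kurzban et al. (2001) observed, contributing at or slightly above the group’s lowest contributor, can be written down directly; the optional upward margin below is an illustrative assumption, not a value estimated in that study.

```python
# Sketch of the "minimal reciprocity" rule observed by Kurzban et al. (2001):
# contribute at, or slightly above, the lowest current contribution in the
# group. The margin parameter is an illustrative assumption, not a value
# estimated in that study.

def minimal_reciprocity(others_contributions, margin=0):
    """Next contribution: match the group's lowest contributor, plus margin."""
    return min(others_contributions) + margin

# A single low contributor drags every follower of this rule toward their
# level, which is one route to a one-bad-apple effect:
minimal_reciprocity([8, 9, 10, 2])   # -> 2
```

Note how the rule makes the group minimum, not the group mean, the effective norm: this is exactly why one bad apple can be so influential under such a rule.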
In their survey research, Price et al. (2002) assess willingness to punish defectors (e.g., draft dodgers at a time of war) via legal sanctions (e.g., prison). As one might expect in early and primarily theoretical efforts, such punishment is considered abstractly, and any punishment (of equal severity or disutility) is assumed to be as effective as any other. These models do, however, suggest that such punishment will not deter widespread defection unless the punishment is costly enough for defectors and the costs of imposing the punishments are not too great for the punishers (cf. Yamagishi, 1986). If these models are correct, then a central issue in human cooperation is whether, when, and how groups effectively punish uncooperative defectors. A primary objective of this paper is to examine the effectiveness of one particular type of punishment—social marginalization, ostracism (Williams, 2001), or exclusion from the group. We ask whether group members will resist the apparently strong temptation to follow the example of a few bad apples if, by failing to do so, they risk social exclusion/marginalization in their group. It is important to note that the studies suggesting that a single bad apple can “spoil the barrel” (e.g., Kurzban et al., 2001; Rutte & Wilke, 1992), like nearly all experimental social dilemma studies, minimized the possibility of group members being able to punish one another (e.g., choices were anonymous and/or group members were led to believe that they could not interact with one another following the study). Social psychological interest in the general effects of social exclusion (and its many variations—ostracism, rejection, bullying) has grown dramatically in the last few years (see Abrams et al., 2005; Williams, 2007; Williams et al., 2005, for excellent overviews).
Much of this work demonstrates that social exclusion or rejection is highly aversive (Baumeister & Tice, 1990; Williams, 2001), or conversely, that there is a strong human need to belong or be included (Baumeister & Leary, 1995). In his work on ostracism, for example, Williams (e.g., 2001) has shown that being socially ostracized frustrates several core human needs – the need to belong, the need to feel in control of one’s world, the need to maintain high self-esteem, and even the need to believe that one actually exists. Williams and his colleagues have shown (a) that ostracism activates the same brain regions as experiencing physical pain (Eisenberger, Lieberman, & Williams, 2003; also cf. MacDonald, Kingsbury, & Shaw, 2005; but see Twenge, Catanese, & Baumeister, 2003, for an alternative view) and (b) that this aversive response appears to be rather automatic (e.g., it is not mediated by target personality, the nature of the source of ostracism, or whether the ostracism is real or imagined; Williams, 2007; Williams & Zadro, 2005). Of particular interest for us, it has been demonstrated that those who have been ostracized may alter their perception, memory, and overt behavior to try to reconnect with the group (e.g., they conform more, Williams, Cheung, & Choi, 2000, Experiment 2; they may work harder at a group task, Williams & Sommer, 1997; they may better recall socially relevant events, Gardner, Pickett, & Brewer, 2000, and Gardner, Pickett, Jefferis, & Knowles, 2005; they may become more sensitive to social cues, Gardner et al., 2005, and Pickett & Gardner, 2005) or with new potential sources of affiliation (Maner, DeWall, Baumeister, & Schaller, 2007). Classic social psychological theory (Festinger, 1951, 1954) and research (cf. Levine, 1989; Schachter, 1951) likewise suggest that those who deviate too extremely or consistently from group norms risk rejection from the group (cf. Levine & Kerr, 2007).
Collectively, these varied lines of research converge in suggesting that the threat of social exclusion may be a particularly effective means of countering the effect of bad apples in social dilemmas. There is now more direct empirical evidence to support this general conjecture within social dilemmas per se (e.g., Cinyabuguma, Page, & Putterman, 2005; Masclet, Noussair, Tucker, & Villeval, 2003). For example, Kerr (1999a, 1999b) reported evidence of more cooperation in a social dilemma by group members who believed that they could be excluded from future game play of their group by a vote of their fellow group members, compared to control subjects for whom such exclusion was determined by a random choice by the experimenter. Similarly, Ouwerkerk et al. (in preparation) have found that their bad apple effect could actually be reversed by a threat of exclusion from the group. In another study, Ouwerkerk et al. (in preparation) found that a brief and temporary exclusion from one’s group significantly increased subsequent cooperation even when the group contained a bad apple. To summarize, prior work suggests that in social dilemmas bad examples are more influential than good examples (the bad apple effect) and that a credible threat of exclusion may eliminate or even reverse this effect. In the present paper, we (a) systematically vary the number of bad apples in the group to determine just how many bad apples it takes to “spoil the barrel” (i.e., to produce a sharp drop in cooperation in the group) without a threat of social exclusion, and then (b) determine whether introducing a viable threat of social exclusion would alter this relationship.
Our prior research (Kerr, 1999a; Ouwerkerk et al., 2005; Ouwerkerk et al., in preparation) led us to hypothesize that the number of bad apples sufficient to produce a sharp drop in cooperation would indeed be moderated by a threat of exclusion, such that it would take more bad apples to tempt a group member to defect if there were a credible threat that the group could exclude uncooperative members. How many more is an open question, which we explore in Experiment 1. One plausible version of this moderation effect is depicted in the fainter, dashed curve (“With Exclusion?”) in Fig. 1. It suggests that in Rutte and Wilke’s (1992) five-person groups, those under a threat of social exclusion would not follow the example of a single or even a pair of bad apples, but would only begin to act more competitively if there were three or more (i.e., a majority of) bad apples in the group. In Experiment 1 we, like all prior investigators, examined these questions in relatively small groups (namely, five-person groups), where group members are typically easy to recognize and sanction. In Experiment 2, we examine larger groups, where the social dynamics might be quite different (e.g., in larger groups it is typically more difficult to monitor, recognize, and sanction members, and group identification may also be weaker; Caporael, 1997).
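The hypothesized moderation amounts to a shift in the threshold of the step function relating bad apples to cooperation. The sketch below makes the predicted pattern explicit; the 0.50/0.20 cooperation levels and the majority threshold are rough readings of the solid and dashed curves in Fig. 1, not fitted estimates.

```python
# Sketch of the hypothesized moderation (cf. Fig. 1). Without an exclusion
# threat, a single bad apple triggers the drop in cooperation (solid curve);
# under a credible threat, the drop is predicted only once bad apples form
# a majority of the group (dashed "With Exclusion?" curve). Levels and the
# threshold are rough readings of the figure, used only for illustration.

def predicted_cooperation(n_bad_apples, group_size=5, exclusion_threat=False):
    high, low = 0.50, 0.20
    # threshold = number of bad apples at which cooperation collapses
    threshold = (group_size // 2 + 1) if exclusion_threat else 1
    return high if n_bad_apples < threshold else low

no_threat = [predicted_cooperation(k) for k in range(5)]
threat = [predicted_cooperation(k, exclusion_threat=True) for k in range(5)]
# no_threat: [0.5, 0.2, 0.2, 0.2, 0.2]
# threat:    [0.5, 0.5, 0.5, 0.2, 0.2]
```

Experiment 1 tests where the step actually falls in five-person groups under each condition, and Experiment 2 asks whether the threshold (and the effectiveness of the threat itself) changes with group size.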