A female advantage in the recognition of emotional facial expressions: test of an evolutionary hypothesis
|Article code||Publication year||English article||Persian translation||Word count|
|37677||2006||16-page PDF||available to order||not calculated|
Publisher : Elsevier - Science Direct
Journal : Evolution and Human Behavior, Volume 27, Issue 6, November 2006, Pages 401–416
Abstract

A set of computerized tasks was used to investigate sex differences in the speed and accuracy of emotion recognition in 62 men and women of reproductive age. Evolutionary theories have posited that female superiority in the perception of emotion might arise from women's near-universal responsibility for child-rearing. Two variants of the child-rearing hypothesis predict either across-the-board female superiority in the discrimination of emotional expressions ("attachment promotion" hypothesis) or a female superiority that is restricted to expressions of negative emotion ("fitness threat" hypothesis). Therefore, we sought to evaluate whether the expression of the sex difference is influenced by the valence of the emotional signal (Positive or Negative). The results showed that women were faster than men at recognizing both positive and negative emotions from facial cues, supporting the attachment promotion hypothesis. Support for the fitness threat hypothesis also was found, in that the sex difference was accentuated for negative emotions. There was no evidence that the female superiority was learned through previous childcare experience or that it was derived from a sex difference in simple perceptual speed. The results suggest that evolved mechanisms, not domain-general learning, underlie the sex difference in recognition of facial emotions.
Introduction

The ability to decode facial expressions of emotion is fundamental to human social interaction. Elements of facial decoding, including the immediate preverbal detection of a facial signal, are believed to represent evolved mechanisms that enable the receiver to predict another individual's emotional state and anticipate future actions (Ekman, 1997, Izard, 1994 and Russell et al., 2003). Ekman and others (Ekman, 1994, Ekman & Friesen, 1971, and Izard, 1994) have argued that a limited set of facial expressions is innate and universally recognized as signals for happiness, sadness, anger, fear, disgust, and surprise. While the verbal labels and cultural rules governing the expression of these emotions may vary, the expressions themselves have a universal signal value. Thus, both the production of specific facial expressions and their interpretation by a receiver are thought to be innate.

It is often claimed that women are superior to men at recognizing facial expressions of emotion (see below). Explanations for the sex difference range from sexual inequalities in power and social status (e.g., see Hall, 1984, Henley, 1977 and Weitz, 1974) to evolutionary perspectives based on women's near-universal responsibility for child-rearing (e.g., Babchuk, Hames, & Thompson, 1985). The primary caretaker hypothesis proposed by Babchuk et al. (1985) contends that females, as a result of their evolutionary role as primary caretakers, will display evolved adaptations that enhance the probability of survival of their offspring. In humans, these adaptations are hypothesized to include the fast and accurate decoding of facial affect, an important means of communication especially in preverbal infants. The child-rearing hypothesis is more complex than it first appears and gives rise to two different predictions.
According to one interpretation of the theory, the "attachment promotion" hypothesis, women should display across-the-board superiority, relative to men, in decoding all facial expressions of emotion, because mothers who are highly responsive to infants' cries, smiles, and other nonverbal signals are likely to produce securely attached infants (Ainsworth, 1979 and Hall et al., 1986), and secure infants display optimal long-term health, immune function, and social outcomes (Goldberg, 2000).

A second interpretation of the theory, the "fitness threat" hypothesis, assigns a special status to negative emotions. It predicts a female superiority that is limited to expressions of negative emotion, including fear, disgust, sadness, and anger. Because negative emotions signal a potential threat to infant survival (e.g., threats to safety, loss, pain, or the ingestion of a toxin) that calls for action on the caretaker's part, whereas positive expressions carry no such imperative, it is specifically facility in the recognition of negative expressions that may have been selected in the primary caretaker and in which a female superiority may therefore be found. By tying the sex difference to parental roles, the fitness threat hypothesis offers an alternative to theories based on individual survival, which predict either no sex difference in the ability to discriminate threat or a female advantage limited to single emotions (e.g., anger, where a sex difference would be adaptive in allowing physically weaker females to preemptively avoid acts of physical aggression, usually initiated by males; Goodall, 1986 and Konner, 1982). Although both sexes have a stake in infant survival, the ability to swiftly and accurately identify potential threat is a basic adaptation to the role of primary caretaker and would be maximized in the sex having the largest investment in each offspring.
Finding a female advantage that is selective to negative emotions would constitute support for the fitness threat hypothesis.

Evidence of a female superiority in identification of facial expressions is mixed. Of 55 studies reviewed by Hall (1978), only 11 (20%) found a significant superiority for females in judging emotions based on visual cues alone (conveyed by the face and/or body). Studies using the Profile of Nonverbal Sensitivity have yielded a median effect size of r=.15 in favor of women when only facial cues were available for decoding (Hall, 1984). A meta-analysis by McClure (2000) found a smaller but statistically significant female advantage among children and adolescents. These effect sizes conceal substantial variability across studies in the size and even the direction of the sex difference. Obtained differences ranged from d=1.86 to d=−0.60 in the 55 studies reviewed by Hall. Inconsistency is to be expected if the female advantage does not encompass all facial expressions of emotion, since most studies do not assess the full range. On the other hand, failures to find a sex difference could simply reflect methodological factors. Many studies have used face exposure times in the 10- to 15-s range or up to 1 min. This lengthy time allowance lacks ecological validity, since facial expressions are often fleeting and since accuracy of decoding depends on the speed with which an expression can be apprehended. Female superiority in perceptual speed, the ability to rapidly absorb the details of a visual stimulus, has been recognized since the 1940s (Harshman et al., 1983, Kim & Petrakis, 1998, Tyler, 1965 and Wesman, 1949) and generalizes to many types of visual stimuli. Since facial decoding involves, under natural conditions, speeded apprehension of visual detail, it is important to rule out the possibility that any female advantage is based on nothing more than a perceptual speed advantage.
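The review above mixes two effect-size metrics, r and d; for readers comparing them, the standard conversion between the two (a textbook formula assuming two equal-sized groups, not something taken from the study itself) can be sketched as:

```python
import math

def r_to_d(r):
    """Convert a point-biserial correlation r to Cohen's d
    (standard conversion assuming two equal-sized groups)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    """Inverse conversion: Cohen's d back to r (equal group sizes)."""
    return d / math.sqrt(d ** 2 + 4)

# The median facial-decoding effect reported by Hall (1984), r = .15,
# corresponds to roughly d = 0.30 -- a small effect on Cohen's scale.
print(round(r_to_d(0.15), 2))  # 0.3
```

On this common scale, the d=1.86 to d=−0.60 range quoted above shows just how heterogeneous the literature is relative to the small median effect.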
In that case, evolutionary explanations based on child-rearing would be inappropriate. Previous work has not included methodological controls to rule out this possibility. The present study was designed to test whether a female advantage in the discrimination of emotional expressions can be verified among young adults of reproductive age. Differences in accuracy and response times (RTs) were evaluated. Secondly, we wished to investigate whether any advantage applies equally to all emotions regardless of hedonic valence, as predicted by the attachment promotion hypothesis, or is differentially found among the negatively valenced emotions, as predicted by the fitness threat hypothesis.
3. Results

Analysis of the scores on the Verbal Meaning Test revealed that the men (M=26.77, S.D.=7.54) and women (M=25.61, S.D.=10.61) were well matched in general level of ability, t(60)=0.50, p=.621. Data from the experimental conditions were evaluated to test whether a female advantage was present in the discrimination of emotional expressions and whether any advantage was differentially seen among the negatively valenced emotions, as predicted by the fitness threat hypothesis.

3.1. Accuracy

Level of accuracy in the practice condition (Facial Matching) was extremely high, demonstrating excellent acquisition of the basic stimulus presentation and response procedure in both sexes. The mean percent correct was 98.25% (S.D.=2.59) for men and 98.65% (S.D.=2.50) for women. Similar high levels of accuracy were seen in the Facial Identity (M=97.20, S.D.=3.66 and M=96.34, S.D.=3.59) and Pattern Matching conditions (M=99.46, S.D.=1.78 and M=98.79, S.D.=2.45) for men and women, respectively. Likewise, accuracy of identification in the Facial Emotion condition was very high, with scores of ~90% or above for all emotions except disgust (M=84.67, S.D.=14.32 and M=90.00, S.D.=7.88) and anger (M=50.65, S.D.=27.20 and M=60.97, S.D.=25.61). Only for the latter two emotions, where accuracy failed to reach ceiling, was there any indication whatsoever of a sex difference: t(45)=1.79, p=.081 for disgust and t(60)=1.54, p=.129 for anger (two-tailed a priori test). Because the scores for the other emotions were at or near ceiling values, sex differences could not be analyzed meaningfully. Therefore, all further statistical analysis focused on the RT data.

3.2. Response times

Means and standard errors for each of the six emotions are shown in Fig. 2.
To evaluate if a sex difference was present in the discrimination of emotional expressions, we entered the RTs for the six emotions and three control conditions into a two-way mixed effects analysis of variance (ANOVA), with sex as the between-subjects factor and condition as a within-subjects factor. Three participants (two females, one male) who had average RTs greater than 3 S.D. from the group mean were omitted from the analysis, resulting in a sample of 59. The results showed a significant main effect of sex, F(1, 57)=7.19, p=.010, and a significant interaction between sex and condition, F(5, 272)=5.68, p<.001. There was also a main effect of condition, F(5, 272)=84.08, p<.001, reflecting the fact that RTs were faster in the control conditions (Facial Matching, Pattern Matching, and Facial Identity) than in the emotion conditions (all p values <.001). The control conditions included the Facial Identity task, in which individual identities had to be decoded to make a correct match. Pairwise comparisons showed that of the six emotions, happy faces elicited significantly shorter RTs than all other expressions (p values<.001), while angry faces elicited longer RTs than all others (p values<.025), except fear (p=.070) and neutral expressions (p=.060).

Fig. 2. Mean RTs for men and women in the six emotion conditions and three control conditions. Bars represent standard errors of the means. Asterisks indicate a significant sex difference (p<.05 or less).

Tukey tests were used to decompose the significant interaction effect. The sex difference was not significant in any of the control conditions (all p values>.05). There was no sex difference in identification of happy faces (p>.10), but women were significantly faster than men at discriminating neutral faces (p<.05), as well as faces depicting disgust (p<.025), fear (p<.025), sadness (p<.01), and anger (p<.01).
Thus, a sex difference in favor of women appeared selectively in the emotion conditions and not in conditions requiring other visual discriminations. The selectivity of the effect suggested that a female advantage in simple perceptual speed was not the basis for the sex difference. Nevertheless, associations between RTs and performance on a conventional test of perceptual speed, the Identical Pictures Test, were evaluated. As expected, women (M=69.98, S.D.=13.50) tended to show faster performance on the perceptual speed test than men (M=65.79, S.D.=14.17), although this was not significant, t(60)=1.19, p=.119 (one tailed). Scores on the test were correlated significantly with the RTs in five of the six emotion conditions. Therefore, the ANOVA was repeated using the Identical Pictures score as a covariate and, also, in a separate analysis, using the RT in the Pattern Matching condition as a covariate. The Pattern Matching condition was expressly devised to control for perceptual speed while closely matching the emotion conditions in all other stimulus presentation and response characteristics. It provided a direct estimate of visual decoding and RT in the absence of facial stimuli. Therefore, of the two control tasks, it was considered the superior covariate. Correlations between Pattern Matching and the emotion conditions ranged from r=.51 to .65. With perceptual speed controlled, the main effect of sex was still highly significant, F(1, 56)=6.08, p=.017, either when Identical Pictures was used as the covariate or when Pattern Matching was used as the covariate, F(1, 56)=14.79, p<.001. The significant interaction between sex and condition was also preserved, F(5, 271)=5.21, p<.001 and F(6, 316)=4.18, p=.001, for the two covariates, respectively. Thus, the female superiority in discriminating emotional expressions was retained when perceptual speed was explicitly controlled. 
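The covariate logic described above, removing variance shared with perceptual speed before testing the group effect, can be illustrated in simplified form. This is not the authors' mixed-design analysis; it is a minimal residualization sketch on made-up data (all numbers below are hypothetical), showing why a group difference that survives such an adjustment cannot be attributed to the covariate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data in the spirit of the design: per-subject mean emotion RT
# and a Pattern Matching RT serving as a perceptual-speed covariate (ms).
n = 30
pattern_rt = rng.normal(600, 60, 2 * n)           # covariate
sex = np.array([0] * n + [1] * n)                 # 0 = women, 1 = men
# Simulated emotion RT: depends on perceptual speed, plus a group effect.
emotion_rt = 900 + 0.6 * pattern_rt + 80 * sex + rng.normal(0, 40, 2 * n)

# Covariate adjustment: regress RT on the covariate, then test the group
# difference on the residuals (a simplified stand-in for ANCOVA).
slope, intercept = np.polyfit(pattern_rt, emotion_rt, 1)
residuals = emotion_rt - (intercept + slope * pattern_rt)
t, p = stats.ttest_ind(residuals[sex == 0], residuals[sex == 1])
print(f"t = {t:.2f}, p = {p:.4f}")  # the simulated group effect survives
```

Because the simulated 80-ms group effect is independent of the covariate, it remains after residualization, which is the same inferential pattern the paper reports for the sex effect.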
The fitness threat hypothesis predicted that the female superiority would be larger for negative emotions than for positive ones. Inspection of the means in the six emotion conditions revealed that the sex difference was indeed larger for each of the four negative emotions than for either of the two positive emotions. To investigate this formally, we performed a two-way mixed effects ANOVA on the Negative and Positive composite scores that represented the mean RT for each participant averaged across the four negative and two positive conditions, respectively. Pattern Matching was used as a covariate to remove variance associated with nonemotive parts of the task. Sex was the between-subjects factor and valence (Positive or Negative) was a within-subjects factor. The results revealed a significant main effect of sex, F(1, 59)=6.88, p=.011. In both categories of emotion, women showed consistently shorter RTs than men. Importantly, the interaction between sex and valence was also significant, F(1, 59)=4.00, p=.050. The sex difference was larger for negative emotions than for positive ones, as predicted by the fitness threat hypothesis (Fig. 3). Although several of the negative emotions were harder to identify than the positive ones, and thus elicited slower RTs (see above), women showed a processing advantage relative to men in decoding the negative emotions. To investigate if the effect was robust, we computed a ratio score using the two composites for each person: (Negative RT−Positive RT)/Pattern Matching RT, a multiplicative instead of additive adjustment for nonemotive factors. The results were essentially identical. A t test on the resulting scores showed that, on average, women could identify negative emotions nearly as adeptly as positive ones, with only a 9% change in RT, while men showed a 27% increase in processing time for the negative emotions, t(60)=2.01, p=.049 (two tailed; Fig. 3).
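The ratio score described above is simple arithmetic; a minimal sketch with hypothetical RT values (not the study's raw data), chosen to reproduce approximately the reported 9% vs. 27% pattern:

```python
def negative_cost_ratio(neg_rt, pos_rt, pattern_rt):
    """Ratio score from the paper: (Negative RT - Positive RT) scaled by
    the Pattern Matching RT, i.e. a multiplicative rather than additive
    adjustment for baseline (nonemotive) response speed."""
    return (neg_rt - pos_rt) / pattern_rt

# Illustrative values only (ms): composite RTs and baseline speeds picked
# to mirror the reported ~9% (women) vs. ~27% (men) slowdown.
women = negative_cost_ratio(neg_rt=1200, pos_rt=1140, pattern_rt=660)
men = negative_cost_ratio(neg_rt=1450, pos_rt=1270, pattern_rt=667)
print(round(women * 100, 1), round(men * 100, 1))  # 9.1 27.0
```

Dividing by each person's Pattern Matching RT expresses the negative-emotion cost as a fraction of that person's own baseline speed, so a generally slow responder is not penalized the way an additive difference score would penalize them.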
Fig. 3. Analysis of the positive and negative composite scores revealed that the sex difference in RT was larger for negative emotions than for positive emotions. Inset: Men showed a nearly 30% increase in processing time for negative emotions over positive ones, while women showed only a 9% increase. Pattern Matching was used to adjust for individual differences in basal response speed (see text).

3.3. Other control tasks

Men and women were equally accurate at generating verbal labels on the Facial Labeling task, t(60)=1.34, p=.187. The mean for men was 4.68 correct (S.D.=1.05) and the mean for women was 5.03 correct (S.D.=1.05), out of a possible 6. Both sexes were able to capture the emotions portrayed with a high level of verbal accuracy.

Experience with children or the theater was analyzed using t tests. The purpose of these analyses was to discover whether any experiential differences existed between the two sexes that might confer an advantage in the recognition of emotional expressions. Sixteen of the 62 participants (26%) reported drama or theater experience, and 75% of these were women. However, Pearson correlations revealed that theater experience did not correlate significantly with RT on any of the emotion tasks (−.12<r<.20). Scores on the childcare variable ranged from 0 to 20 (out of 25). Women reported more childcare experience (M=8.42, S.D.=4.15) than men (M=5.45, S.D.=3.13), t(60)=3.18, p=.002. However, there was no evidence that greater experience with children was associated with better recognition times, either for the six emotions individually or for the two valence composites (men: r=−.15 to −.02; women: r=−.03 to .10, n.s.).