Decoding of affective facial expressions in the context of emotional situations
Article code | Publication year | English article length
---|---|---
37765 | 2008 | 7-page PDF

Publisher : Elsevier - Science Direct
Journal : Neuropsychologia, Volume 46, Issue 11, September 2008, Pages 2615–2621
English Abstract
Abstract

The ability to recognize other persons' affective states and to link these with aspects of the current situation arises early in development and is a precursor function of a Theory of Mind (ToM). Until now, studies have investigated either the processing of affective faces or of affective pictures. In the present study, we tried to realize a scenario more similar to everyday situations. We employed fMRI and used a picture matching task to explore the neural correlates associated with the integration and decoding of facial affective expressions in the context of affective situations. In the emotion condition, the participants judged an emotional facial expression with respect to the content of an emotional picture. In the two other conditions, participants indicated colour matches on the background of either affective or scrambled pictures. In contrast to colour matching on scrambled pictures, colour matching on emotional pictures resulted in longer reaction times and increased activation of the bilateral fusiform and occipital gyri. These results indicate that, although it was task irrelevant, participants may have attended to the emotional background of the pictures. The emotion task was associated with longer reaction times and with activation of the bilateral fusiform and occipital gyri. Additionally, emotion attribution induced left amygdala activity. Possibly, attention processes and amygdala projections modulated the activation found in the occipital and fusiform areas. Furthermore, the involvement of the amygdala in the ToM precursor ability to link facial expressions with an emotional situation may indicate that the amygdala is involved in the development of stable ToM abilities.
English Introduction
1. Introduction

Successful navigation through the human social world demands the ability to attribute mental states to oneself and to others, an ability called Theory of Mind (ToM; Frith & Frith, 2006; Perner, 1991). The development of ToM abilities proceeds in several steps. Whereas children develop an understanding of knowledge and belief around the age of 3–4 years, even 2-year-olds already possess an understanding of desires and intentions (Wellman, Cross, & Watson, 2001). The understanding of someone's actions in terms of the actor's underlying intentions is itself preceded by the understanding that emotional displays provide information about specific objects and situations (Wellman & Lagattuta, 2000). Therefore, the abilities to recognize other persons' emotions and to link these emotions to the given situation are important precursor functions for developing a ToM (Sodian & Thoermer, 2008).

Until now, most studies that investigated the neural correlates of emotion recognition examined either the processing of affective faces (Adolphs, 2002 and Blair, 2003; Vuilleumier & Pourtois, 2007; Winston, O'Doherty, & Dolan, 2003) or of affective pictures (Lane et al., 1997 and Lane et al., 1999; Paradiso et al., 2003; Sabatinelli, Bradley, Fitzsimmons, & Lang, 2005). However, real life situations are more complex because they are defined by a combination of different socially relevant stimuli. It is not sufficient to decode only the facial expression in order to assess a social situation. Fridlund (1994) suggests that faces are not purely surfaces on which private affective meanings are somehow made visible, but rather tools for communicating behavioural intentions and social motives to specific addressees. In his view, facial expressions relate to how people are likely to act rather than to their current subjective emotional experience. Therefore, in ambiguous or emotional situations a person's facial expression may communicate information about approach or withdrawal movements toward an object (Parkinson, 2005). For example, by the age of 12 months infants confronted with an ambiguous situation, in which they do not know what to do, consult their mother's face for advice (Sorce, Emde, Campos, & Klinnert, 1985). Frijda (1953) found that spontaneous open-ended judgments about pictures of facial movements often referred to situations that might have provoked the observed responses, and that emotional interpretations were only offered later. Likewise, for the detection of deception it is not sufficient to decode nonverbal cues such as affective facial expressions (Zuckerman, DePaulo, & Rosenthal, 1981); the situational context of the facial movement must also be considered. In sum, facial expressions carry important information about the situational context within which they occur (Carroll & Russell, 1996) and provide hints for an adequate modulation of our own behaviour in this situation. Therefore, the proper way of understanding affective facial expressions is to consider the context in which they appear (Parkinson, 2005).

Attributional processes during interaction with other people therefore demand the integration of different sources of information. In order to represent the mental state of a particular individual in a particular situation, both the person's facial expression and the social context must be simultaneously represented and integrated. This demands not only the decoding of an affective facial expression but also the consideration of how a certain situation influences other people's mental states (Lieberman, 2007).
Until now, face processing research has focused on the neural mechanisms that are specific for emotion expression recognition (Adolphs, 2002; Haxby, Hoffman, & Gobbini, 2002; Vuilleumier & Pourtois, 2007). Therefore, in most studies the affective face stimuli were presented without any social or affective context. In these studies, participants usually had to recognize the presented affective expression with respect to prescribed emotional adjectives (Adolphs, Damasio, Tranel, & Damasio, 1996; Hennenlotter & Schroeder, 2006) or to make judgements about non-affective aspects such as similarity (e.g. Vuilleumier & Pourtois, 2007). It is still under debate whether facial emotion perception is organized in a modular fashion, with distinct neural circuitry subserving individual emotions (Adolphs, 2002), or whether there is a common substrate to the perception of multiple basic emotions (Blair, 2003 and Winston et al., 2003). Whereas neuroimaging studies consistently show activation of the amygdala during the processing of negative, especially fearful, facial expressions (Gorno-Tempini et al., 2001, Iidaka et al., 2001 and Morris et al., 1998; Pessoa, McKenna, Gutierrez, & Ungerleider, 2002; Vuilleumier, Armony, Driver, & Dolan, 2001), or when subjects view faces of people who are perceived in negative ways, for example as untrustworthy or bizarre (Winston, Strange, O'Doherty, & Dolan, 2002), results on positive facial expressions appear inconsistent (Zald, 2003).

Besides the processing of facial affect, neuroimaging studies that dealt with the neural correlates of emotion recognition have commonly used affective pictures, particularly pictures of the International Affective Picture System (IAPS; Lane et al., 1997, Lane et al., 1999, Paradiso et al., 2003 and Sabatinelli et al., 2005). Results revealed that affective pictures and affective faces seem to activate similar brain regions, especially the amygdala and the visual cortex (Phan, Wager, Taylor, & Liberzon, 2002; Zald, 2003). However, until now only a few studies have directly compared facial expressions and affective pictures (Britton, Taylor, Sudheimer, & Liberzon, 2006). Britton et al. (2006) presented either facial expressions or IAPS pictures. Independent of the emotional content of the stimuli, facial expressions and IAPS pictures activated similar brain regions, reflected in a common pattern of activation which included the amygdala, posterior hippocampus, ventromedial prefrontal cortex, and visual cortex. A differential pattern of activation was found in the superior temporal gyrus, insula, and anterior cingulate; in these regions the emotional faces induced more activation than the IAPS pictures.

In sum, there is conclusive evidence that affective face and affective picture processing are associated with activation of similar brain areas, especially the occipital visual cortex and the amygdala. Until now, however, studies have investigated either face or picture processing. For navigating through the social world it is important to integrate relevant information from different sources. In humans, facial expressions do not automatically display a person's emotions; rather, the social context also predicts the probability of an emotional facial expression (Fridlund, 1994). Therefore, for a valid attribution of another person's mental state in everyday situations, people must be able to take into account both the person's facial expression and the affective content of the specific situation.
In the present study, we focused on the human ability to link affective facial expressions with the emotional context of a given situation, which seems to be an important precursor function of ToM (Wellman & Lagattuta, 2000). In order to realize a scenario somewhat similar to real life situations, we presented affective faces and emotional pictures simultaneously. The participants' task was to link the emotional content of the situation with the appropriate facial expression. In the emotion condition, participants therefore had to decide which of two faces matched the affective situation. In the two other conditions, participants had to indicate colour matches either on the background of emotional pictures or on the background of scrambled pictures, as sketched schematically below.
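To make the factorial structure of the design concrete, the following minimal Python sketch encodes one hypothetical trial per condition. All field names, condition labels, and response mappings are illustrative placeholders and are not taken from the original study's materials or code.

```python
# Illustrative schematic of the three experimental conditions described above.
# Field names and values are hypothetical, chosen only for readability.
from dataclasses import dataclass

@dataclass
class Trial:
    condition: str         # "emotion", "colour_unscrambled", or "colour_scrambled"
    background: str        # "IAPS_affective" picture or "scrambled" picture
    probe: str             # what the participant has to judge
    correct_response: str  # assumed left/right button for the matching item

# Emotion condition: choose which of two faces matches the affective picture.
emotion_trial = Trial(
    condition="emotion",
    background="IAPS_affective",
    probe="which of two facial expressions matches the pictured situation",
    correct_response="left",
)

# Colour conditions: judge whether two coloured squares match, with either an
# affective or a scrambled picture in the background.
colour_unscrambled_trial = Trial(
    condition="colour_unscrambled",
    background="IAPS_affective",
    probe="do the two coloured squares match",
    correct_response="right",
)
colour_scrambled_trial = Trial(
    condition="colour_scrambled",
    background="scrambled",
    probe="do the two coloured squares match",
    correct_response="right",
)
```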
English Results
3. Results

3.1. Behavioural measures

The intensity ratings for both the IAPS pictures and the Ekman faces matched the depicted emotion category. In both stimulus sets, neutral stimuli were rated as neutral (IAPS: M = 8.1, S.D. = 1.0; faces: M = 7.8, S.D. = 1.3), sad stimuli as sad (IAPS: M = 7.9, S.D. = 1.1; faces: M = 8.1, S.D. = 0.9), happy stimuli as happy (IAPS: M = 7.5, S.D. = 1.5; faces: M = 8.4, S.D. = 0.7), fearful stimuli as fearful (IAPS: M = 6.3, S.D. = 2.4; faces: M = 8.0, S.D. = 1.1) and stimuli of disgust as disgusting (IAPS: M = 7.4, S.D. = 1.7; faces: M = 7.9, S.D. = 1.6).

Response accuracy (%) and reaction time (ms) were analyzed by two separate repeated measures analyses of variance (ANOVA) with the factor "condition" (emotion/colour unscrambled/colour scrambled; see Fig. 1).

Fig. 1. Reaction time (ms) and hit rate (%) for the three conditions emotion, colour unscrambled (colour_us) and colour scrambled (colour_s). Significant differences are indicated by *p < 0.01.

For response accuracy there was a main effect of "condition" (F(2, 34) = 29.26, p < 0.001). t-Tests revealed that in the colour unscrambled (M = 97%, S.D. = 3.4%) and in the colour scrambled condition (M = 99%, S.D. = 1.5%) participants showed significantly higher hit rates than in the emotion condition (M = 92%, S.D. = 4.8%; emotion versus colour unscrambled: t(17) = −5.14, p < 0.001; emotion versus colour scrambled: t(17) = −6.76, p < 0.001). There was no significant difference in accuracy between the colour unscrambled and the colour scrambled condition (t(17) = 1.96, n.s.).

For reaction time, there was also a significant main effect of "condition" (F(2, 34) = 294.3, p < 0.001). Subsequent t-tests revealed that participants responded significantly faster in the colour unscrambled condition (M = 857.1 ms, S.D. = 161.5 ms) and in the colour scrambled condition (M = 746.1 ms, S.D. = 108.9 ms) than in the emotion condition (M = 1392.1 ms, S.D. = 162.1 ms; emotion versus colour unscrambled: t(17) = 17.42, p < 0.001; emotion versus colour scrambled: t(17) = 19.88, p < 0.001). The difference between the colour unscrambled and the colour scrambled condition was also significant (t(17) = −5.32, p < 0.001), with faster reaction times in the colour scrambled condition.

These ratings and the response accuracy indicate that the emotion category of the stimuli was relatively unambiguous and that participants were able to recognize the emotion category.
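As a rough illustration of this type of behavioural analysis, the sketch below runs a one-way repeated measures ANOVA over the three conditions followed by paired t-tests. The data are simulated and all variable names are hypothetical, so it mirrors only the structure of the analysis, not the study's actual data or code.

```python
# Illustrative behavioural analysis: repeated measures ANOVA over the three
# conditions, followed by paired t-tests. Data are simulated placeholders.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ["emotion", "colour_unscrambled", "colour_scrambled"]
approx_rt = {"emotion": 1392.0, "colour_unscrambled": 857.0, "colour_scrambled": 746.0}

# One simulated mean reaction time per subject and condition (18 subjects).
rows = [{"subject": s, "condition": c, "rt": rng.normal(approx_rt[c], 150.0)}
        for s in range(18) for c in conditions]
df = pd.DataFrame(rows)

# One-way repeated measures ANOVA with the within-subject factor "condition".
anova = AnovaRM(data=df, depvar="rt", subject="subject", within=["condition"]).fit()
print(anova)

# Follow-up paired t-tests between conditions.
wide = df.pivot(index="subject", columns="condition", values="rt")
pairs = [("emotion", "colour_unscrambled"),
         ("emotion", "colour_scrambled"),
         ("colour_unscrambled", "colour_scrambled")]
for a, b in pairs:
    res = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t({len(wide) - 1}) = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```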
3.2. Functional imaging data

The results of the two contrasts of interest "emotion > colour unscrambled" and "colour unscrambled > colour scrambled" are summarized in Table 1 and visualised in Fig. 2.

Table 1. Regions of increased brain activity associated with emotion and colour attribution

Brain region | x | y | z | Z-score | Voxels (n)
---|---|---|---|---|---
Emotion > colour_us | | | | |
Left inferior occipital gyrus | −26 | −98 | 0 | 6.45 | 580
Right inferior occipital gyrus | 32 | −96 | −6 | 5.98 | 293
Left fusiform gyrus | −46 | −50 | −14 | 5.91 | 163
Right fusiform gyrus | 46 | −52 | −16 | 5.48 | 55
Right fusiform gyrus | 40 | −66 | −14 | 5.30 | 20
Right inferior frontal gyrus | 54 | 32 | 22 | 5.12 | 8
Left amygdala | −24 | −4 | −16 | 5.43 | 7
Colour_us > colour_s | | | | |
Left inferior occipital gyrus | −34 | −92 | 8 | 5.16 | 6
Left middle occipital gyrus | −42 | −78 | 8 | 5.53 | 97
Right middle occipital gyrus | 42 | −78 | −12 | 5.60 | 97
Left fusiform gyrus | −44 | −66 | −22 | 6.31 | 348
Right fusiform gyrus | 38 | −34 | −20 | 5.19 | 6
Right middle temporal gyrus | 48 | −72 | 10 | 5.45 | 19

x, y and z give the cluster centre coordinates; coordinates refer to the Montreal Neurological Institute (MNI) reference brain.

Fig. 2. Top—examples for the three experimental conditions: emotion, colour_unscrambled and colour_scrambled. In the emotion condition, the affective face matching the emotional content of the affective picture had to be indicated, independently of the coloured squares. In the two colour conditions, the match of the coloured squares had to be indicated, independently of the pictures' content. Bottom—fMRI results for the two contrasts of interest: emotion > colour_unscrambled and colour_unscrambled > colour_scrambled. Only active emotion decoding induces amygdala activation compared to the colour tasks. Effects are significant at p < 0.05, corrected for multiple comparisons with family-wise error (FWE). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.)

In contrast to colour attribution, emotion attribution resulted in more activation of the bilateral inferior occipital gyrus, the bilateral fusiform gyrus, the right inferior frontal gyrus and the left amygdala. Colour attribution on the background of affective pictures, in contrast to colour attribution on the background of scrambled pictures, was associated with higher activation of the left inferior occipital gyrus, the bilateral middle occipital gyrus, the bilateral fusiform gyrus and the right middle temporal gyrus.

Fig. 3 shows the ROI analysis for the left amygdala and reveals that amygdala activation was only increased during active emotion decoding in the emotion condition; the colour matching task on the background of the affective pictures had no influence on amygdala activity.

Fig. 3. Percent signal change in the left amygdala for each condition, showing increased responses during active emotion decoding.
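For readers unfamiliar with ROI analyses like the one in Fig. 3, the following numpy sketch shows how percent signal change per condition could in principle be computed from a pre-extracted mean ROI time series in a block design. The timings, signal values, and variable names are hypothetical; this is a minimal sketch under those assumptions, not the pipeline used in the study.

```python
# Schematic percent-signal-change computation for a single ROI time series.
# All onsets and signal values below are simulated placeholders.
import numpy as np

n_scans = 300
roi_ts = np.random.default_rng(1).normal(1000.0, 5.0, n_scans)  # simulated ROI signal

# Hypothetical (start, stop) scan intervals for each condition and for baseline.
blocks = {
    "emotion":            [(20, 30), (120, 130), (220, 230)],
    "colour_unscrambled": [(50, 60), (150, 160), (250, 260)],
    "colour_scrambled":   [(80, 90), (180, 190), (280, 290)],
}
baseline_blocks = [(0, 10), (100, 110), (200, 210)]

def block_mean(ts, spans):
    """Average the time series over a list of (start, stop) scan intervals."""
    return np.mean(np.concatenate([ts[a:b] for a, b in spans]))

baseline = block_mean(roi_ts, baseline_blocks)

# Percent signal change relative to baseline, separately for each condition.
for cond, spans in blocks.items():
    psc = 100.0 * (block_mean(roi_ts, spans) - baseline) / baseline
    print(f"{cond}: {psc:+.2f} % signal change")
```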