Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision
|Article code||Publication year||English article||Persian translation||Word count|
|37794||2011||10-page PDF||Order||6918 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Cortex, Volume 47, Issue 9, October 2011, Pages 1116–1125
Abstract
In a prior study, we showed that an anteromedial temporal lobe resection can impair the recognition of scary music (Gosselin et al., 2005). In other studies (Adolphs et al., 2001 and Anderson et al., 2000), similar results were obtained with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls on two emotional tasks. In the musical emotion task, stimuli evoked fear, peacefulness, happiness or sadness, and participants rated to what extent each stimulus expressed these four emotions on 10-point scales. The facial emotion task included morphed stimuli whose expression varied from faint to more pronounced and evoked fear, happiness, sadness, surprise, anger or disgust; participants were asked to select the appropriate label. Most patients were impaired in the recognition of both scary music and fearful faces. Furthermore, the results in the two tasks were correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved while recognition of scary music is impaired. Such a dissociation, found in two cases, suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks; this hypothesis is discussed in light of the current literature.
1. Introduction
As initially proposed by Klüver and Bucy (1939), the amygdala appears to be essential for processing fearful signals in monkeys. This well-established finding has recently been confirmed in humans by Lanteaume et al. (2007), who were able to induce negative states, such as fear, by electrical stimulation of the amygdala. Several neuropsychological studies have provided additional support for this hypothesis by investigating patients with amygdala lesions. In these studies, patients who had undergone a unilateral medial temporal lobe excision that included the amygdala, for the treatment of medically intractable epilepsy, demonstrated a deficit in recognizing fearful facial expressions, although they could usually recognize happy faces (Adolphs et al., 2001, Anderson et al., 2000, Burton et al., 2003, Hlobil et al., 2008, McClelland et al., 2006 and Palermo et al., 2009). The specific role of the amygdala in the recognition of fearful faces was also confirmed in patient SM, who presented a selective and bilateral amygdala lesion (Adolphs et al., 1994 and Adolphs et al., 1995). Further evidence for the relationship between the amygdala and the perception of threat was provided by neuroimaging data obtained in healthy participants. These studies observed an increase of activation in the amygdala when fearful facial expressions were shown to participants, as compared with happy faces (Breiter et al., 1996, Experiment 1; see Calder et al., 2001 for a review; Morris et al., 1998a, Morris et al., 1998b and Whalen et al., 1998; but see also Sergerie et al., 2008). Thus, both neuroimaging and neuropsychological studies consistently relate the processing of fearful faces to the amygdala. However, the specific role of the amygdala in processing emotional expressions of faces has been questioned: this structure seems to be more generally involved in recognizing threat signals in facial and nonfacial stimuli, including music.
Music constitutes an efficient means of inducing fear and suspense (e.g., in movies; Cohen, 2001). While the perception of scary music can be influenced by culture (e.g., by prior exposure to scary movies), it seems to be universally recognized (see Fritz et al., 2009). In this recent study, emotion recognition from western music was assessed in native African participants (Mafa) who have had limited exposure to western culture. Results showed that Mafa participants recognized happy, sad, and fearful western music above chance, suggesting that the expression of these basic emotions in western music can be recognized universally, as are facial expressions (Ekman et al., 1969 and Elfenbein and Ambady, 2002). Although the capacity to identify scary music seems universal, this ability can be disturbed after a brain lesion. In a previous study, we showed that patients with unilateral medial temporal lobe removal were impaired in recognizing scary music and, to a lesser degree, peaceful music, whereas recognition of other emotions, such as happiness, was spared (Gosselin et al., 2005). These results suggest that both the right and the left anteromedial temporal lobes (including the amygdala) play a role in the recognition of threat in a musical context. To confirm the critical role of the amygdala in this ability, the same methodology was used with SM, who presented a bilateral lesion limited to the amygdala (Gosselin et al., 2007). As predicted, SM was unable to recognize scary musical excerpts, but she also demonstrated difficulties in processing sadness, whereas the recognition of the other emotional categories was not impaired. Recent neuroimaging studies provided further support for this conclusion by using visual stimuli in combination with musical backgrounds. Baumgartner et al. (2006) showed that the activation of the amygdala was higher when negative pictures (e.g., fearful pictures from the International Affective Picture System; Lang et al., 2005) were presented with scary music (congruent condition) than when they were presented with positive music (incongruent condition). Similarly, an increase in amygdala responses was observed when emotionally neutral movies were combined with scary music, as compared to a condition where the movies were presented without music (Eldar et al., 2007). Finally, a functional magnetic resonance imaging (fMRI) study showed that listening to scary music is sufficient to activate the amygdala, especially when participants listened to the musical excerpts with closed eyes, as compared with open eyes (Lerner et al., 2009). All of these neuropsychological and neuroimaging results are in agreement with the hypothesis that the amygdala is involved in processing fear as expressed by music (see Koelsch, 2010, and Peretz, 2010, for recent reviews on musical emotions). Taken together, prior studies suggest that the amygdala might be a multimodal structure. Being anatomically connected to different associative areas, including visual and auditory areas (Aggleton and Saunders, 2000), the amygdala can unsurprisingly be involved in processing fearful signals independently of their modality. However, prior attempts to find evidence for the multimodal involvement of the amygdala in fear processing by using vocal expression stimuli (including speech prosody) have yielded inconsistent results. For instance, some neuropsychological studies found impaired fear recognition for both facial and vocal expressions in the same patients with amygdala damage (e.g., patient DR: Calder et al., 1996 and Scott et al., 1997; patient NM: Sprengelmeyer et al., 1999; and patients with unilateral lesion: Dellacherie et al., 2011a).
Conversely, other patients showed difficulties in recognizing fearful faces, while they were normal at recognizing fearful vocal expressions (e.g., patient SM: Adolphs et al., 1994 and Adolphs and Tranel, 1999, patient SP: Anderson and Phelps, 1998a and Anderson and Phelps, 1998b). Neuroimaging experiments have also yielded mixed results. Dolan et al. (2001) found an enhanced activation in the amygdala when fearful faces were presented with congruent fearful voices as compared with incongruent happy voices. However, Pourtois et al. (2005), by adding single modality conditions, found activation in the amygdala when fearful faces were presented alone, but no activation when the fearful voices were presented alone. This lack of convergence in recruiting the amygdala across visual and auditory modality, more particularly with vocal expressions, is intriguing. The multimodal role of the amygdala in the recognition of fear expressed by visual and auditory modalities can also be assessed by using another powerful threatening auditory signal, namely scary music. As mentioned before, a few neuropsychological studies supported the multimodal implication of the amygdala in recognizing both fearful faces (e.g., Adolphs et al., 1994 and Adolphs et al., 2001) and scary music (i.e., Gosselin et al., 2005 and Gosselin et al., 2007). However, these isolated findings are compatible with the existence of distinct neural pathways for music and faces. By testing the same patients (hence the same lesions), we can test whether or not fear-related emotions are supported by the same neural network across the auditory and visual domains. To our knowledge, there is only one exception in the literature that explored recognition of emotions evoked by both faces and music in a single patient. It concerns the well known case of SM who presented emotional deficits with scary faces and music after a bilateral lesion limited to the amygdala (Adolphs et al., 1994 and Gosselin et al., 2007). 
However, no investigation has confirmed the co-occurrence of emotional deficits in both the facial and musical domains in patients with a unilateral amygdala lesion. Moreover, the multimodal nature of SM's deficits was questioned by the lack of impairment when emotional prosody was used, SM being able to recognize fear from prosody (Adolphs and Tranel, 1999). To further investigate the involvement of the amygdala in processing visual and auditory threatening signals, we tested the same patients with unilateral temporal lobe lesions on two different tasks assessing the emotional recognition of faces and music. All patients included in this study had undergone unilateral anteromedial temporal lobe resection to control medically intractable epilepsy. The excision typically removed the amygdala and surrounding neural tissue, such as the temporal pole, the rhinal cortex and the hippocampus. The goal of the present study was to assess whether deficits in recognizing scary music and fearful faces would co-occur in the same patients. Based on the literature, we predicted that patients with unilateral temporal lobe resection (including the amygdala) would be impaired in the recognition of both scary music and fearful faces, while some other categories of emotion, particularly happiness, should be recognized similarly by patients and controls across modalities.
English Conclusion
3. Results
Statistical analyses were carried out for each task separately. As we found no evidence in the literature supporting cerebral lateralization of fear recognition (Sergerie et al., 2008 and Gosselin et al., 2005), we combined the scores of patients with either right-side (RR) or left-side (LR) resection. Note that this grouping is also supported by statistical analysis in the present study. In the facial emotion task, an ANOVA considering group (LR, RR) and intended emotion (fear, happiness, sadness, surprise, anger, disgust) showed no interaction [F(5, 70) = .05, p = .10, η2 = .00] and no group effect [F(1, 14) = 1.24, p = .28, η2 = .08]. Similarly, in the task of identifying musical emotion, neither the interaction [group (LR, RR) × intended emotion (scary, peaceful, happy, sad)] nor the effect of group was significant [F(3, 42) = 1.06, p = .38, η2 = .07, and F(1, 14) = .40, p = .54, η2 = .03, respectively]. We then compared the scores obtained by the patients with those obtained by matched normal controls, and subsequently tested the correlations between the patients' emotion recognition scores in the two tasks.

3.1. Task of identifying musical emotion
Since participants were free to select as many of the four emotion labels as they wished and to provide a graded judgment for each, we first derived the best label attributed to each musical excerpt by each participant. This derived measure constitutes the best comparison with a four-alternative forced choice (Adolphs and Tranel, 1999) and was also used in several previous studies (e.g., Adolphs and Tranel, 1999, Gosselin et al., 2005, Gosselin et al., 2007 and Vieillard et al., 2008); consequently, comparisons with previous work are easier. The best label was derived by selecting the label that had received the maximal rating. In this analysis, when the maximal rating corresponded to the label that matched the intended emotion of the composer, a score of 1 was given.
When the maximal rating did not correspond to the intended emotion, a score of 0 was given. When the maximal rating was given to more than one label, the response was considered 'ambivalent' and received a score of 0. As can be seen in Fig. 1, control participants recognized all four intended musical emotions. Sadness and peacefulness were more difficult to recognize, whereas threat and happiness were clearly identified. In contrast, patients' judgments differed somewhat, especially for the scary and peaceful stimuli (see Fig. 1).

Fig. 1. Mean percentages of the derivation of best label obtained by the patients and the controls, as a function of the four emotions. Error bars represent standard errors. Stars indicate significant differences between groups.

The percentages of correct derivations of the intended emotions expressed by music were submitted to an ANOVA considering group (patients, controls) as a between-subjects factor, intended emotion (fear, peacefulness, happiness, sadness) as a within-subjects factor, and age, education, and musical background as covariates. Because the interaction between group and intended emotion was significant [F(3, 81) = 2.75, p = .048, η2 = .09] even when demographic variables were considered as covariates, those covariates were no longer considered in the following analyses. Patients and controls recognized the intended emotions differently, as attested by a significant group by intended emotion interaction [F(3, 90) = 3.54, p = .02, η2 = .11] and a significant main effect of group [F(1, 30) = 14.49, p = .001, η2 = .33]. Scary music [t(30) = 4.67, p = .0005, d = 1.65, using a one-sided t-test and a Bonferroni-corrected alpha threshold of .025] and peaceful music [marginally, with t(106) = 3.02, p = .06, d = 1.09, by post-hoc comparison] were less well recognized by the patients than by the controls.
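The best-label derivation described above is an argmax-with-tie-handling rule. A minimal sketch in Python (the function name and dictionary layout are illustrative, not taken from the study):

```python
def best_label_score(ratings, intended):
    """Score one excerpt: 1 if the single highest-rated label matches
    the composer's intended emotion, 0 otherwise.  A tie for the
    maximal rating counts as an 'ambivalent' response and scores 0."""
    top = max(ratings.values())
    winners = [label for label, r in ratings.items() if r == top]
    if len(winners) > 1:  # ambivalent: more than one best label
        return 0
    return 1 if winners[0] == intended else 0

# Ratings of the four emotions on 10-point scales for one excerpt:
print(best_label_score({"scary": 8, "peaceful": 2, "happy": 1, "sad": 3}, "scary"))  # 1
print(best_label_score({"scary": 5, "peaceful": 5, "happy": 1, "sad": 3}, "scary"))  # 0 (tie)
```

Averaging these 0/1 scores across excerpts yields the per-emotion percentages analyzed below.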
In contrast, the performance of patients and controls did not differ for happy music [t(30) = 1.17, p = .125, d = .41, using a one-sided t-test and a Bonferroni-corrected alpha threshold of .025] or sad music [t(106) = 1.15, p = .94, d = .36, by post-hoc comparison]. Note that the main effect of intended emotion was also significant, with F(3, 90) = 18.88, p = .001, η2 = .39. To exclude the possibility that the deficit in fear recognition in our patients was simply a consequence of scary music being harder to identify than the other stimulus categories, we compared scores for the four emotions in control participants. Results demonstrated that scary stimuli were not more difficult to recognize than any other stimulus category. More precisely, scary music was as easily recognized as happy [t(90) = .75, p = 1.00, d = −.45] and peaceful music [t(90) = 2.11, p = .41, d = .83], and was easier to recognize than sad music [t(90) = 4.29, p = .001, d = 1.02]. In order to determine whether the intensity judgments of the musical emotion clips, particularly for the scary music, differed between the groups, the raw ratings were further examined. As can be seen in Fig. 2, the results were very similar to those of the best-label derivation. The mean ratings indicated that the scary [t(30) = 3.10, p = .002, d = 1.10] and peaceful stimuli [t(73) = 3.10, p = .05, d = 1.21] were judged as less intense by the patients than by the control participants. The performance of patients and controls did not differ for happy [t(30) = .84, p = .21, d = .30] or sad music [t(73) = 1.33, p = .89, d = .38]. The difference between patients and controls was supported by a significant group × intended emotion interaction [F(3, 90) = 3.26, p = .03, η2 = .10]. Significant main effects of group [F(1, 30) = 7.12, p = .01, η2 = .19] and of intended emotion were also obtained [F(3, 90) = 14.87, p = .001, η2 = .33].

Fig. 2. Mean ratings given by the patients and the controls as a function of the four emotions. Error bars represent standard errors. Stars indicate significant differences between groups.

The contribution of the resection size (i.e., the sum of the remaining volumes for the parahippocampal cortex, entorhinal cortex, perirhinal cortex and hippocampus) to the evaluation of scary music was also examined (see Fig. 3, left panel). The patients' performance, as expressed by the derivation of best label, did not significantly correlate with resection size (r = .11, p = .71).

Fig. 3. The correlation between individual scores and the sum of the remaining neural tissue (i.e., for the parahippocampal cortex, entorhinal cortex, perirhinal cortex and hippocampus) is plotted for scary music (left panel) and fearful faces (right panel). Star indicates significant correlation.

3.2. Task of facial emotion identification
The mean percentages of correct responses (see Fig. 4) for the intended emotions expressed by faces were submitted to an ANOVA considering group (patients, controls) as a between-subjects factor, intended emotion (happiness, sadness, fear, anger, disgust and surprise) as a within-subjects factor, and age, education, and musical background as covariates. The interaction between group and intended emotion was significant [F(5, 130) = 2.34, p = .045, η2 = .08] when demographic variables were considered as covariates; consequently, the following analyses were made without those covariates. The two groups of participants recognized the emotions differently, as attested by a significant group by intended emotion interaction, with F(5, 145) = 2.39, p = .04, η2 = .08. Fearful faces were less well recognized by the patients [t(29) = 2.00, p = .025, d = .72] than by the controls.
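The corrected alpha thresholds quoted in these comparisons (e.g., alpha = .025 for two planned one-sided tests) follow the standard Bonferroni rule of dividing the family-wise alpha by the number of comparisons. A hedged sketch (the function is illustrative, not the authors' analysis code):

```python
def bonferroni(p_values, alpha=0.05):
    """Return the Bonferroni-corrected threshold (alpha divided by the
    number of tests) and, for each p-value, whether it survives it."""
    threshold = alpha / len(p_values)
    return threshold, [p < threshold for p in p_values]

# Two planned comparisons, as in the music task (scary, happy):
threshold, survives = bonferroni([0.0005, 0.125])
print(threshold)  # 0.025
print(survives)   # [True, False]
```

With the p-values reported above, only the scary-music comparison survives the corrected threshold, matching the pattern of results in the text.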
In contrast, the performance of patients and controls did not significantly differ for any other facial emotion [happiness, t(29) = 1.59, p = .06 (alpha threshold = .025), d = .57; anger, t(144) = 2.13, p = .60, d = .89; sadness, t(144) = .03, p = 1.00, d = .02; disgust, t(144) = −.34, p = 1.00, d = −.09; and surprise, t(144) = −.39, p = 1.00, d = −.20].

Fig. 4. Mean percentages of correct responses given by the patients and the controls, as a function of the six facial expressions. Error bars represent standard errors. Stars indicate significant differences between groups.

In order to verify that the observed difference between patients and controls was not due to fearful faces being more difficult to recognize than the other emotions, we compared performance for the six emotions in controls. These analyses indicated that the score obtained by controls for fearful faces was not significantly below performance for any other emotion [happiness, t(145) = 2.02, p = .68, d = .66; sadness, t(145) = 1.07, p = 1.00, d = .38; anger, t(145) = .60, p = 1.00, d = .17; disgust, t(145) = .24, p = 1.00, d = .05; and surprise, t(145) = .71, p = 1.00, d = .28]. Moreover, the influence of the resection size (i.e., the sum of the remaining volumes for the parahippocampal cortex, entorhinal cortex, perirhinal cortex and hippocampus) on the evaluation of fearful facial expressions was further explored (see Fig. 3, right panel). The patients' scores for fearful faces correlated significantly with resection size (r = .71, p = .003): the larger the removal (the less remaining tissue), the more impaired was the patient's recognition of fearful faces.

3.3. Correlations between emotional music and faces
Correlations between the patients' recognition of fear, happiness and sadness in the musical and facial tasks were computed. As can be seen in Fig. 5, a significant correlation was found between the scores obtained with music and faces for the category of fear only (r = .53, df = 16, p = .04). The correlations between music and faces were not significant for the other emotions (happiness, r = −.05, p = .86; sadness, r = .24, p = .37).

Fig. 5. Correlation between the mean percentage of best-label derivation for music and the mean percentage of correct responses for faces given by the patients, for fear (left panel), happiness (right panel), and sadness (lower panel). Star indicates significant correlation between the emotional tasks.

Interestingly, individual inspection showed that two patients did not present this pattern of results, as they were severely impaired in the recognition of scary music but not in the recognition of fearful faces. Demographic variables and IQ scores for those two patients (P11 and P14), as well as for the remaining 14 patients, are presented in Table 2. Since P11 and P14 were within the range of the other patients for general education, musical background, and IQ scores, these factors did not seem to account for the dissociation. Remaining volumes for P11, P14, and the mean volume of the other patients are also presented in Table 3. The sums of all remaining structures for P11 and P14 were in the upper range of the patient group. However, this did not seem to fully explain the dissociation, because other patients who also had a restricted lesion did not show this pattern of dissociation in fear recognition between faces and music.

Table 3. The remaining raw volumes of the resected side for the parahippocampal, perirhinal and entorhinal cortex and the hippocampus, as well as the sum of those four structures, expressed in cm3, for the two patients (P11 and P14) who presented a fear dissociation between music and faces. The mean, SD and range of the 13 patients without dissociation are also presented.
| |Parahippocampal|Perirhinal|Entorhinal|Hippocampus|Sum|
|Patients, n = 13: Mean (SD)|1.10 (.41)|.17 (.37)|.04 (.12)|.14 (.18)|1.39 (.59)|
|Patients, n = 13: Range|.59–2.23|0–1.34|0–.33|0–.60|.59–3.82|
|P11|2.11|.74|.53|.60|3.97|
|P14|2.17|.65|.34|.20|3.37|
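The across-task correlations reported in Section 3.3 are ordinary Pearson coefficients computed over the patients' per-emotion scores. A self-contained stdlib sketch (the sample data below are made up for illustration, not the study's scores):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related samples give r = 1:
print(round(pearson_r([10, 40, 70, 90], [20, 50, 80, 100]), 3))  # 1.0
```

In the study, x and y would be each patient's fear score for music (best-label percentage) and for faces (percentage correct), respectively.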