Download English ISI Article No. 37787
Article Title

Do we recognize facial expressions of emotions from persons with schizophrenia?
Article Code: 37787
Publication Year: 2010
English PDF: 7 pages
Source

Publisher: Elsevier - Science Direct

Journal: Schizophrenia Research, Volume 122, Issues 1–3, September 2010, Pages 144–150

Keywords
Static facial expressions; Emotion expressions; Emotion recognition; Schizophrenia
Article Preview

English Abstract

Objectives: Impaired facial emotion expression is central to schizophrenia. Extensive work has quantified these differences, but it remains unclear how patient expressions are perceived by their healthy peers and other non-trained individuals. This study examined how static facial expressions of posed and evoked emotions of patients and controls are recognized by naïve observers.

Methods: Facial photographs of 6 persons with stable schizophrenia and 6 matched healthy controls expressing five universal emotions (happy, sad, anger, fear, and disgust) and neutral were selected from a previous data set. Untrained raters (N = 420) viewed each photo and identified the expressed emotion. Repeated measures ANOVAs were used to assess differences in accuracy and error patterns between patient and control expressions.

Results: Expressions from healthy individuals were more accurately identified than those from schizophrenia patients across all conditions, except for posed sadness and evoked neutral faces, in which the groups did not differ, and posed fear, in which patient expressions were more accurately identified than control expressions. Analysis of incorrect responses revealed that misidentifications as neutral were the most common across both groups but significantly more likely for patient expressions.

Conclusion: The present findings demonstrate that patient expressions of emotion are poorly perceived by naïve observers and support the concept of affective flattening in schizophrenia. These results highlight the real-world implications of impairments in emotion expression and may shed light on potential mechanisms of impaired social functioning in schizophrenia.
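To make the design of the accuracy analysis concrete, the sketch below shows how per-rater identification accuracy could be submitted to a repeated measures ANOVA in Python. This is a minimal illustration, not the authors' code: the file name, column names, and long-format layout are all assumptions.

```python
# Minimal sketch of the accuracy analysis described in the abstract.
# Assumed layout: one row per rater x stimulus-group x condition x
# emotion cell, holding that cell's percent-correct score.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("ratings.csv")  # hypothetical columns: rater, stim_group, condition, emotion, pct_correct

# 2 (stimulus group) x 2 (condition) x 6 (emotion) repeated measures ANOVA;
# all three factors vary within each of the 420 raters.
res = AnovaRM(
    data=df,
    depvar="pct_correct",
    subject="rater",
    within=["stim_group", "condition", "emotion"],
).fit()
print(res)

# AnovaRM reports F and degrees of freedom but not partial eta squared;
# it can be recovered as eta_p^2 = (F * df1) / (F * df1 + df2).
```

Note that the fractional degrees of freedom in the Results (e.g., F(3.45, 1445.50)) indicate a sphericity correction such as Greenhouse–Geisser, which this sketch does not apply.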

English Introduction

Impaired facial expressions of emotion represent an area of behavioral dysfunction that is central to the illness of schizophrenia and indicates both current functioning and future outcome (Gur et al., 2006). Much of our knowledge of affective flattening in schizophrenia has been based on clinical rating scales of global affect during psychiatric interviews, but the reliability of ratings of affective flattening and other negative symptoms can be problematic (Andreasen, 1997). Over the past 20 years, efforts beyond global ratings of facial expressions by a single trained or experienced rater have applied standardized procedures to examine specific face regions based on facial muscles and to quantify the degree and time course of emotional expressions. Several groups have utilized paradigms based on the Facial Action Coding System (Ekman and Friesen, 1978) to demonstrate generalized and emotion-specific deficits in posed and evoked facial expressions. Findings suggested that patients do not engage the same facial muscles as controls (Aghevli et al., 2003; Berenbaum & Oltmanns, 1992; Blanchard et al., 1994; Gaebel & Wolwer, 2004; Kohler et al., 2008b; Kring et al., 1993; Schneider et al., 1990), that they make facial expressions less frequently than controls (Tremeau et al., 2005), and that their expressions are of shorter duration (Tremeau et al., 2005; Kohler et al., 2008a). Alternatively, more subtle expressions have been quantitatively analyzed through electromyographic (EMG) activity in specific facial muscles, with mixed findings. When exposed to emotionally evoking stimuli and interviews, patients show less zygomaticus activity in the lower face during happy interviews and more movement than controls during sad interviews (Mattes et al., 1995). Other EMG findings, however, suggest that patient facial movements are similar to those of healthy subjects when viewing positively and negatively valenced static images (Kring et al., 1999). More recently, Putnam and Kring (2007) instructed trained, reliable raters to identify emotions in posed expressions and reported that patients less accurately portrayed surprise and sadness, but not disgust, fear, happiness, or anger.

Despite the extensive work devoted to quantifying how patient expressions differ from those of healthy individuals, comparatively little attention has been given to understanding how patient expressions are perceived by untrained healthy persons. While scientifically less rigorous, this approach may have greater ecological validity in revealing how emotion expressions of persons with schizophrenia are recognized in the community, and may also have greater relevance with respect to possible effects on interpersonal engagement and social functioning. Gottheil et al. (1970) showed video clips of participants' evoked emotions to untrained raters and found that controls' "affective themes" were more correctly identified than patients', although the difference was significant only for anger. Similarly, Gottheil et al. (1976) instructed untrained judges to match emotional and neutral images from patients and controls with corresponding emotion words. Patients had less identifiable expressions of anger and sadness, but differences were not found across all emotions. Consistent with these findings, Winkelmayer et al. (1978) found that happy faces were best recognized while sad and angry faces had the lowest recognition rates. Braun et al. (1991) utilized a similar matching paradigm, expanded the stimuli to include the five universal emotions and surprise, and found an overall deficit in patient facial expressions but not in any particular emotion. Conversely, Flack et al. (1997) found emotional expressions of depressed persons, schizophrenia patients, and controls to be equally recognizable by naïve undergraduate raters. Given these mixed findings, further examination of how the general population interprets facial expressions of persons with schizophrenia may provide valuable new insight into the extent of emotion expression impairments and their potential impact on social functioning.

The present study expanded on Gottheil's original model and utilized a computerized test of facial expressions of emotion from schizophrenia patients and matched controls. Posed and evoked expressions of five universal emotions were collected in a standardized, empirically validated procedure that has previously been described in detail (Kohler et al., 2008b). These photos were displayed to a large group of untrained observers who identified the emotion expressed on each face. Based on the work reviewed above, we hypothesized that, in comparison with controls, (1) patients' expressions would be more poorly recognized, particularly in the evoked condition, which is more spontaneous and under less volitional control; (2) based on the findings of Gottheil et al. (1976), when separated by emotion, patients' expressions of sadness and anger would be less accurately recognized; and (3) consistent with the presence of flat affect in schizophrenia, patients' expressions would be more likely to be misidentified as neutral.

English Results

3. Results

Consistent with our first prediction that patient expressions would be more poorly recognized, the analysis of emotion identification accuracy revealed a main effect of stimulus participant group, indicating that expressions from healthy individuals were more accurately identified than expressions from individuals with schizophrenia (mean = 60.4% vs. 50.2%; F(1,419) = 1097.43, p < .001, ηp² = .724). Main effects were also evident for condition and emotion: posed expressions were more often correctly identified than evoked expressions (mean = 61.2% vs. 49.5%; F(1,419) = 1606.63, p < .001, ηp² = .793), and happiness was the most accurately identified emotion while sadness was the least accurately identified (happy = 84.6%, sad = 37.2%, anger = 44.3%, fear = 49.5%, disgust = 44.2%, neutral = 72.0%; F(3.45,1445.50) = 856.91, p < .001, ηp² = .672).

Also as predicted, the two-way interaction between stimulus participant group and condition was significant (F(1,419) = 385.61, p < .001, ηp² = .479), indicating that recognition of patient expressions was most impaired in the evoked condition. As partially anticipated by our second hypothesis, the stimulus participant group by emotion interaction was also significant (F(4.69,1967.15) = 249.52, p < .001, ηp² = .373), and, more importantly, so was the three-way interaction between stimulus group, condition, and emotion (F(4.63,1941.05) = 106.02, p < .001, ηp² = .202). As shown in Fig. 1, follow-up t-tests demonstrated that identification accuracy for patient and control expressions differed significantly in all assessed situations except for sadness in the posed condition and neutral in the evoked condition. In all significant differences, control expressions were more accurately identified than patient expressions, except for fear in the posed condition, where correct identification of patient expressions was approximately 10% higher than of control expressions.

Fig. 1. Emotion identification accuracy. Mean (±SE) percent correct for controls and patients for posed emotional displays (left) and evoked emotional displays (right). Asterisks denote significant group differences at a Bonferroni corrected level of p < .004.

In examining our third hypothesis, the analysis of incorrect responses revealed a main effect of emotion, indicating that across both groups, misidentifications of emotions as neutral were most common whereas misidentifications of emotions as fearful were least common (happy = 4.7%, sad = 15.7%, anger = 22.4%, fear = 3.9%, disgust = 14.2%, neutral = 39.2%; F(2.19,915.41) = 680.3, p < .001, ηp² = .619). Moreover, the three-way interaction between stimulus group, condition, and error was also significant (F(3.64,1524.22) = 64.79, p < .001, ηp² = .134). Follow-up t-tests revealed significant group differences in the number of happy, anger, fear, and neutral misidentifications in the posed condition, and in the number of happy, anger, fear, disgust, and neutral misidentifications in the evoked condition. In accord with our hypothesis, and as can be seen in Fig. 2, in both conditions patient expressions were significantly more likely to be incorrectly labeled as neutral. Full confusion matrices are provided in Supplemental Table 1.
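The Bonferroni threshold of p < .004 used for the follow-up contrasts corresponds to correcting alpha = .05 across the 12 condition-by-emotion cells (.05 / 12 ≈ .0042). A minimal sketch of such follow-up tests, reusing the hypothetical long-format table from the earlier sketch, might look like this:

```python
# Sketch of Bonferroni-corrected follow-up contrasts: paired t-tests
# comparing accuracy for control vs. patient expressions within each
# condition x emotion cell. Column names are hypothetical, as before.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("ratings.csv")
alpha = 0.05 / 12  # 2 conditions x 6 emotions = 12 comparisons -> p < .004

for (cond, emo), cell in df.groupby(["condition", "emotion"]):
    # One accuracy score per rater for each stimulus group in this cell.
    wide = cell.pivot(index="rater", columns="stim_group", values="pct_correct")
    t, p = ttest_rel(wide["control"], wide["patient"])
    mark = "*" if p < alpha else ""
    print(f"{cond:>6} / {emo:<7} t = {t:7.2f}, p = {p:.2e} {mark}")
```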
Fig. 2. Error patterns. Mean percentage of incorrect responses that were wrongly labeled happy, sad, anger, fear, disgust, and neutral. Italicized labels indicate significant group differences at a Bonferroni corrected level of p < .004.

To rule out the possibility that group differences were due to the degree to which stimulus individuals experienced the target emotions, we conducted a repeated measures ANOVA on EERS scores from stimulus individuals, with emotion (happy, sad, anger, fear, and disgust) and condition (posed and evoked) as within-subject factors and group (patient or control) as the between-subject factor. Only the main effect of condition was significant (F(1,9) = 43.04, p < .001, ηp² = .827), revealing that emotions were experienced more strongly during the evoked condition than during the posed condition (evoked mean = 7.02, posed mean = 4.08). No other main effects or interactions were significant, suggesting that differences in identification accuracy for patients and controls were likely not due to disproportionate degrees of emotional experience during expression (Table 1).

Finally, an assessment of overall accuracy rates for each stimulus individual indicated that identification accuracy for the stimuli of one patient was 3 SD below the mean accuracy for all patient stimuli. Given the potential outlier status of this individual, all analyses were repeated with responses to stimuli from this individual omitted. All main effects and interactions were unchanged, and we therefore opted to leave the original dataset intact.
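The outlier screen described in the final paragraph is straightforward to reproduce in the same hypothetical setup; the sketch below flags any stimulus individual whose overall accuracy falls more than 3 SD below their group's mean (the stim_id column is an added assumption).

```python
# Sketch of the outlier screen: flag stimulus individuals whose overall
# identification accuracy lies more than 3 SD below their group mean.
# Assumes the hypothetical ratings table also carries a stim_id column
# identifying which of the 12 stimulus individuals each photo came from.
import pandas as pd

df = pd.read_csv("ratings.csv")
acc = df.groupby(["stim_group", "stim_id"])["pct_correct"].mean().reset_index()

for group, g in acc.groupby("stim_group"):
    cutoff = g["pct_correct"].mean() - 3 * g["pct_correct"].std()
    flagged = g.loc[g["pct_correct"] < cutoff, "stim_id"].tolist()
    print(f"{group}: outliers below {cutoff:.1f}% -> {flagged}")
```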