Imitation of facial expressions in schizophrenia
| Article code | Publication year | English article length |
|---|---|---|
| 37678 | 2006 | 8-page PDF |

Publisher: Elsevier - Science Direct
Journal: Psychiatry Research, Volume 145, Issues 2–3, 7 December 2006, Pages 87–94
Abstract
Diminished facial expressivity is a common feature of schizophrenia that interferes with effective interpersonal communication. This study was designed to determine if real-time visual feedback improved the ability of patients with schizophrenia to imitate and produce modeled facial expressions. Twenty patients with schizophrenia and 10 controls viewed static images of facial expressions and were asked to imitate them. Half of the images were imitated with the use of a mirror and half were imitated without the use of a mirror. In addition, we examined whether practice in imitating and producing expressions improved the ability of participants to generate facial expressions on their own, without the aid of a model or mirror. Participants' facial expressions were photographed with a digital camera and each was rated for accuracy in producing characteristic facial expressions. Patients with schizophrenia were less accurate in imitating and producing facial expressions than controls, and real-time visual feedback did not improve accuracy in either group. Preliminary findings suggest that exposure to model expressions and practice in generating these expressions can improve the accuracy of certain posed expressions in schizophrenia.
Introduction
Disturbances of affect in schizophrenia have been recognized since the earliest descriptions of the disorder (Bleuler, 1950 and Kraepelin, 1971). Flattened affect, inappropriate affect, and labile affect are prominent features of schizophrenia that are often used in diagnosing the disorder. Broadly defined, affect is an observer-rated assessment of a person's internal emotional or feeling state. Facial expression, body posture, voice intonation, and motor activity are the observable signs clinicians use to assess affect. Emotional and affective disturbances also contribute to an array of interpersonal and social deficits found in schizophrenia (e.g., Mueser et al., 1996). Persons with schizophrenia are less accurate in identifying facial expressions in others and are less expressive themselves (for reviews, see Mandal et al., 1998 and Pinkham et al., 2003).

Facial expressions serve a crucial role in human communication. The expression on a person's face provides a wealth of information about the person and the situation, as well as feedback about how to respond appropriately. Expressions of anger signal a need to modify behavior, whereas expressions of happiness can reward and maintain current behavior. In other words, facial expressions help to regulate one's reactions to others.

Facial expressivity in schizophrenia has been rated using several objective coding methods. The Facial Action Coding System (FACS; Ekman and Friesen, 1978) categorizes facial behavior based on the muscular actions that change the appearance of the face; expressions are decomposed into the action units that produce the movement. The Emotion Facial Action Coding System (EMFACS), a version of FACS, is used to rate action units in the face as well as the expressed emotion. The Facial Expression Coding System (FACES; described in Kring and Neale, 1996) rates global aspects of expression such as intensity, frequency, and valence. Studies using these rating systems yield the consistent finding that facial expressivity is reduced in people with schizophrenia (Schneider et al., 1990, Berenbaum and Oltmanns, 1992, Blanchard et al., 1994, Mattes et al., 1995, Gaebel and Wolwer, 2004 and Tremeau et al., 2005).

However, there appears to be a discrepancy between expressed emotion and self-reports of emotional experience. For instance, in the Kring and Neale (1996) study, facial expressions of patients and normal controls were videotaped as they viewed film clips with happy, sad, fearful, or neutral themes. Patients rated the films as being as emotional as controls did, but displayed fewer expressions during the emotional clips. In addition, patients' physiological arousal, measured by skin conductance responses, showed a greater response to the films than that of controls. This incongruence between expressed emotion and self-reported emotional experience has also been observed in deficit syndrome patients and in patients specifically selected on the basis of flattened affect (Berenbaum and Oltmanns, 1992 and Earnst and Kring, 1999). Reduced expressivity in schizophrenia does not appear to be related to medication, as evidenced by similar findings in unmedicated patients (Kring et al., 1999).

Recent treatment approaches in schizophrenia have emphasized the importance of implementing psychosocial treatments in conjunction with pharmacotherapy.
Psychosocial treatments, such as social skills training (Bellack, 2004), have yielded moderate success in treating interpersonal and social deficits in schizophrenia, yet little effort has focused on developing strategies to improve facial expressiveness in these patients. Because facial expressions are a key element of effective interpersonal relationships, it is important for people with schizophrenia to develop the ability to convey emotion nonverbally through facial expression. Additionally, according to versions of the facial feedback hypothesis, facial emotion actions, or “facial efference,” can influence subjective emotional experience (for reviews, see Adelmann and Zajonc, 1989 and McIntosh, 1996). Reduced facial expressiveness in patients could therefore lead to aberrant modulation of internal feeling states.

This study examined the ability of schizophrenia patients and controls to imitate static images of facial expressions. Participants were asked to produce expressions in response to instructions, representing facial actions that are under voluntary or conscious control, as opposed to evoked emotional expressions. Such expressions are therefore more akin to socially regulated emotional expressions, which have been referred to as display rules (Ekman and Friesen, 1976).

The study had two aims. First, we tested whether “real-time” visual feedback would improve the ability to express emotions. Participants were asked to imitate expressions (e.g., happy, disgust, or anger) under two conditions: with and without the aid of a mirror. The goal was to generate facial expressions that accurately matched universally recognized expressions, and we expected imitation to be more accurate with the help of a mirror. Second, we assessed whether practice in imitating expressions (with and without the mirror) would improve the ability to self-generate expressions in the absence of a modeled expression. To assess the effects of exposure to modeled expressions, participants were asked to produce expressions in response to a verbal instruction, without the aid of a modeled expression or the mirror. A test of facial expression identification was included to examine the relation between recognition and expression of emotions.
Results
3.1. Imitation of facial expressions

The score for each photograph was the average of the two raters' scores. Agreement between raters was assessed by intraclass correlation (ICC, Case 3; Shrout and Fleiss, 1979) and was high for both patients (0.97) and controls (0.92). The data for one control were excluded from the analyses due to experimenter error, yielding a total of nine controls.

A 2 (group) × 2 (mirror condition) × 5 (expression) analysis of variance (ANOVA) on the ratings showed the following significant effects: a main effect of group, F(1, 27) = 9.74, P < 0.01, a main effect of expression, F(1, 27) = 20.69, P < 0.001, and a group × expression interaction, F(4, 108) = 2.83, P < 0.05. The data in Table 1 indicate that patients were rated as less accurate than controls for all expressions except neutral. Quite unexpectedly, ratings for each expression were similar in the mirror and no-mirror conditions for both patients and controls; providing a mirror had no beneficial effect, F(1, 27) = 0.00, P > 0.05. This finding suggests that intentional modification of facial features in order to imitate expressions is difficult for patients and controls alike and does not improve with “real-time” feedback provided by a mirror.

Given the small sample of controls in this study, differences between patients and controls were also checked using non-parametric tests. Mann-Whitney U-tests comparing the two groups on each of the five expressions (averaged across mirror conditions) yielded results similar to those reported above: patients were rated as less accurate than controls for all but the neutral expression. The results of these tests are shown in Table 1. The dose of medication in CPZ equivalents correlated only with ratings for the disgust expression in the mirror condition (r = −0.53, P < 0.05).

Table 1. Mean ratings (S.D.) for imitated expressions in the mirror and no-mirror conditions for patients and controls

| Expression | Control: No Mirror | Control: Mirror | Control: Average | Patient: No Mirror | Patient: Mirror | Patient: Average | P value |
|---|---|---|---|---|---|---|---|
| Angry | 4.8 (1.1) | 5.2 (1.1) | 5.0 (0.94) | 3.9 (1.6) | 3.5 (1.2) | 3.7 (1.3) | 0.01 |
| Disgust | 5.4 (1.6) | 5.5 (1.0) | 5.4 (1.2) | 3.7 (1.9) | 3.9 (1.5) | 3.8 (1.6) | 0.05 |
| Happy | 6.1 (0.88) | 5.8 (0.96) | 5.9 (0.87) | 4.9 (1.5) | 4.9 (1.4) | 4.9 (1.4) | 0.055 |
| Sad | 4.2 (1.1) | 4.2 (1.1) | 4.2 (0.91) | 2.8 (1.2) | 3.2 (1.4) | 3.0 (1.0) | 0.01 |
| Neutral | 5.8 (1.4) | 5.6 (1.2) | 5.7 (1.2) | 5.7 (0.63) | 5.7 (0.75) | 5.7 (0.57) | NS |

Significance levels are for Mann-Whitney U-tests comparing the average rating for each expression (averaged across mirror and no-mirror conditions) between patients and controls.
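To make the agreement and group-comparison statistics above concrete, the following is a minimal Python sketch using simulated data, not the study's actual scores. The helper `icc_3_1` is a hypothetical implementation of the Shrout and Fleiss (1979) Case 3 consistency formula, and the group comparison uses SciPy's Mann-Whitney U-test; the sample sizes (20 patients, 9 controls, two raters) mirror the design described above.

```python
import numpy as np
from scipy import stats

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed, single rater, consistency
    (Shrout and Fleiss, 1979, Case 3).
    ratings: (n_targets, k_raters) matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # one mean per rated photograph
    col_means = ratings.mean(axis=0)   # one mean per rater
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)                                    # between targets
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(0)

# Hypothetical accuracy ratings (1-7 scale) from two raters for 40 photographs.
true_quality = rng.uniform(2, 6, size=40)
ratings = np.column_stack(
    [true_quality + rng.normal(0, 0.4, size=40) for _ in range(2)]
)
print(f"ICC(3,1) = {icc_3_1(ratings):.2f}")

# Group comparison as in the text: Mann-Whitney U on per-participant
# average ratings for one expression (20 patients vs. 9 controls).
patients = rng.normal(3.7, 1.3, size=20)
controls = rng.normal(5.0, 0.9, size=9)
u, p = stats.mannwhitneyu(patients, controls, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, P = {p:.3f}")
```

ICC(3,1) treats the raters as fixed rather than randomly sampled, which matches a design in which the same two raters scored every photograph.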
3.2. Self-generation of expressions

Table 2 displays the average rating for each expression produced in response to a verbal instruction (without the aid of a model or mirror) for patients and controls in Phase 1 and Phase 5. Each rating of a participant's expression is the average of the two raters' scores; agreement between raters was high for patients (ICC = 0.94) and controls (ICC = 0.91). A 2 (group) × 2 (Phase 1, Phase 5) × 5 (expression) ANOVA on the ratings yielded a main effect of group, F(1, 26) = 14.29, P < 0.001, a main effect of phase, F(1, 26) = 15.87, P < 0.001, and a main effect of expression, F(4, 104) = 20.72, P < 0.001. A Wilcoxon signed-rank test comparing Phase 1 and Phase 5 across all participants showed a significant increase only in the angry and disgust conditions; significance levels for these comparisons are shown in Table 2.

Table 2. Mean ratings (S.D.) for self-generated expressions in Phase 1 and Phase 5 for patients and controls

| Expression | Control: Phase 1 | Control: Phase 5 | Patient: Phase 1 | Patient: Phase 5 | P value |
|---|---|---|---|---|---|
| Angry | 4.4 (1.4) | 5.2 (1.0) | 3.3 (1.9) | 4.2 (1.5) | 0.01 |
| Disgust | 3.1 (1.2) | 4.3 (1.1) | 2.5 (1.1) | 3.3 (1.5) | 0.05 |
| Happy | 5.6 (1.0) | 5.6 (1.2) | 4.5 (1.5) | 4.7 (1.5) | NS |
| Sad | 4.3 (1.2) | 4.4 (1.7) | 2.6 (1.4) | 2.9 (1.3) | NS |
| Neutral | 5.4 (1.3) | 5.9 (0.9) | 5.3 (1.1) | 5.5 (1.0) | NS |

Significance levels are for Wilcoxon signed-rank tests comparing Phase 1 and Phase 5 for each expression.

Because our primary interest was whether patients improved their ability to generate expressions between Phases 1 and 5, we conducted a separate 2 (phase) × 5 (expression) ANOVA on the patients' ratings. The results confirmed significant main effects of phase, F(1, 18) = 12.81, P < 0.01, and expression, F(4, 72) = 17.82, P < 0.001, in the patient group. Post-hoc t-tests indicated a significant increase in the anger condition, t(19) = −2.74, P < 0.013, and a marginally significant trend in the disgust condition, t(18) = −2.00, P < 0.06. There was no improvement in the happy, sad, or neutral conditions (all P's > 0.05). CPZ equivalents did not correlate with ratings from Phase 1 or Phase 5.

3.3. Identification of facial expression

Participants were shown 10 facial expressions (2 faces per condition) and asked to select the emotion portrayed in each face from among the five options (happy, sad, angry, disgust, and neutral). A Mann-Whitney U-test showed that identification accuracy for patients (0.69) did not differ significantly from that of controls (0.73), P > 0.05. These data suggest that inaccuracies in expressing emotions occur even when patients can identify expressions. This is further supported by the absence of a correlation between identification and production: identification accuracy did not correlate with accuracy in expressing emotions for either patients (Pearson's r = −0.01, P > 0.05) or controls (Pearson's r = −0.58, P > 0.05), based on ratings of self-generated expressions in Phase 1.

3.4. Imitation of isolated facial movements

To rule out the possibility that patients could not imitate expressions because of gross motor impairments of the face, a subset of eleven patients was asked to imitate the following four movements, one at a time: open mouth, squint eyes, raise eyebrows, and pucker lips. Each movement was scored 0 (could not perform) or 1 (could perform). The number of patients imitating each movement accurately was 11/11 for open mouth, 11/11 for squint eyes, 9/11 for raise eyebrows, and 11/11 for pucker lips. These findings suggest that patients could perform isolated facial movements in response to instructions to do so.
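As an illustration of the paired Phase 1 versus Phase 5 comparisons in Section 3.2 and the identification-production correlation in Section 3.3, here is a minimal Python sketch. All values are simulated stand-ins for the study's data; the variable names and effect sizes are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 20

# Hypothetical per-patient accuracy ratings (1-7 scale) for the 'angry'
# expression, self-generated in Phase 1 and again in Phase 5.
phase1 = np.clip(rng.normal(3.3, 1.9, n_patients), 1, 7)
phase5 = np.clip(phase1 + rng.normal(0.9, 0.8, n_patients), 1, 7)

# Non-parametric paired comparison, as used across all participants.
w, p_w = stats.wilcoxon(phase1, phase5)
print(f"Wilcoxon signed-rank: W = {w:.1f}, P = {p_w:.3f}")

# Post-hoc paired t-test within the patient group, as reported for anger.
t, p_t = stats.ttest_rel(phase1, phase5)
print(f"Paired t-test: t({n_patients - 1}) = {t:.2f}, P = {p_t:.3f}")

# Correlation between identification accuracy (proportion correct) and
# Phase 1 production ratings, as in Section 3.3.
identification = np.clip(rng.normal(0.69, 0.15, n_patients), 0, 1)
r, p_r = stats.pearsonr(identification, phase1)
print(f"Pearson r = {r:.2f}, P = {p_r:.3f}")
```

The Wilcoxon test makes no normality assumption about the rating differences, which is why it complements the parametric ANOVA and t-tests with samples this small.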