Download English ISI article no. 37777
Persian article title

Recognition of emotional and non-emotional facial expressions: a comparison between Williams syndrome and autism

Article code: 37777
Publication year: 2009
English article: 10-page PDF
Persian translation: by order
Word count: not calculated
English title
Recognition of emotional and nonemotional facial expressions: A comparison between Williams syndrome and autism
Source

Publisher: Elsevier - Science Direct

Journal: Research in Developmental Disabilities, Volume 30, Issue 5, September–October 2009, Pages 976–985

Keywords
Williams syndrome - autistic disorder - emotional - social cognition
Article preview

English abstract

The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social behaviour and social cognition. Twelve participants with WS (from 6;1 to 15 years) and twelve participants with autism (from 4;9 to 8 years) were matched on verbal mental age. Their performances were compared with those of twelve typically developing controls matched on verbal mental age (from 3;1 to 9;2). A set of five tasks assessing different dimensions of emotional and nonemotional facial recognition was administered. Results indicated that recognition of emotional facial expressions is more impaired in Williams syndrome than in autism. Our study, which compared Williams syndrome and autism over a small age range, highlighted two distinct profiles which call into question the relationships between social behaviour/cognition and emotion perception.

English conclusion

Results

Prior to the parametric analysis, we performed Levene's test to check the equality of variances in our results. The test was not significant for any of the tasks, so all the results were subjected to an analysis of variance (ANOVA).

2.1. Processing of facial information

2.1.1. Facial discrimination task

First, the "proportion of correct answers" dependent variable was processed in a repeated-measures ANOVA, with Group as a between-groups factor (3 levels: WS, AUT and VMA) and Gender as a within-subjects factor (2 levels: male and female). We did not observe any Group effect, but there was an effect of Gender (F(1, 33) = 7.67, p < .01), indicating that male faces were more easily identified than female faces. We also observed a Group × Gender interaction (F(2, 33) = 3.96, p < .05) (see Fig. 1), which we explored with pairwise comparisons: there were fewer correct answers for female faces than for male faces, but only in the AUT group (F(1, 33) = 14.98, p < .001).

Fig. 1. Proportion of correct answers for each group (WS, AUT and VMA) and each condition (male/female) in the facial discrimination task; **p < .01.

2.1.2. Facial movement task

The "proportion of correct answers" dependent variable (see Fig. 2) was processed in a repeated-measures ANOVA, with Group as a between-groups factor (3 levels: WS, AUT and VMA) and Movement as a within-subjects factor (2 levels: closed eyes and open eyes). We observed a Group effect (F(2, 33) = 3.42, p < .05); pairwise comparisons showed fewer correct answers in the WS group than in the VMA group (F(1, 33) = 6.73, p < .05). We also observed a Movement effect (F(1, 33) = 27.56, p < .0001), with fewer correct answers in the "Open eyes" condition than in the "Closed eyes" one.
Fig. 2. Proportion of correct answers for the facial movement task for each group and each condition; *p < .05.

2.2. Emotional tasks

2.2.1. Labelling task

The "proportion of correct answers" dependent variable (see Table 2) was processed in a one-way ANOVA with the Group factor (3 levels: WS, AUT and VMA). We did not observe any Group effect. We then processed the same variable in a one-way ANOVA, again with the Group factor, for each separate emotion (anger, happiness, sadness, fear, surprise and neutral). We only observed a Group effect for happiness (F(2, 33) = 7.28, p < .01): the WS group produced fewer correct answers than the AUT group (F(1, 33) = 12.21, p < .01) which, in turn, produced more correct answers than the VMA group (F(1, 33) = 9.57, p < .01).

Table 2. Proportion of correct answers in each task, for each group and each emotion.

            Labelling task       Emotion matching task    Emotion identification task
Emotion     WS    AUT   VMA      WS    AUT   VMA          WS    AUT   VMA
Anger       0.72  0.88  0.92     0.47  0.57  0.53         0.78  0.79  0.90
Happiness   0.47  0.97  0.52     0.77  0.91  0.93         0.78  0.90  0.94
Fear        0.42  0.61  0.58     0.68  0.82  0.76         0.66  0.83  0.86
Surprise    0.36  0.50  0.58     0.69  0.89  0.81         0.54  0.68  0.77
Sadness     0.47  0.67  0.67     0.48  0.57  0.74         0.63  0.82  0.95
Neutral     0.00  0.08  0.08     –     –     –            –     –     –
Mean        0.41  0.62  0.56     0.62  0.75  0.75         0.68  0.80  0.88

The "mean number of confusions" dependent variable was processed in an ANOVA with Group as a between-groups factor (3 levels: WS, AUT and VMA) for each type of confusion. We observed a Group effect for the confusion between neutral and fear (F(2, 33) = 3.75, p < .05): the AUT group confused these two expressions more often than the WS group (F(1, 33) = 7.46, p < .05). We also observed a Group effect for the confusion between fear and anger (F(2, 33) = 3.32, p < .05).
The WS group confused these two expressions more often than the AUT group (F(1, 33) = 6.21, p < .05). Lastly, we observed a Group effect for the confusion between surprise and neutral (F(2, 33) = 4.64, p < .05): the AUT group confused these two expressions more often than either the WS group (F(1, 33) = 8.68, p < .01) or the VMA group (F(1, 33) = 4.59, p < .05).

2.2.2. Emotion matching task

First, the "proportion of correct answers" dependent variable (see Table 2) was processed in a one-way ANOVA with the Group factor (3 levels: WS, AUT and VMA). We did not observe any Group effect. We then processed the same variable in a one-way ANOVA, again with the Group factor, for each separate emotion (anger, happiness, sadness, fear and surprise). We only observed a Group effect for sadness (F(2, 33) = 4.84, p < .05). Pairwise comparisons showed no difference between the WS and AUT groups, which both provided fewer correct answers than the VMA group (F(1, 33) = 9.34, p < .01 and F(1, 33) = 4.15, p < .05 respectively). Secondly, the "mean number of confusions" dependent variable was processed in an ANOVA with Group as a between-groups factor (3 levels: WS, AUT and VMA) for each type of confusion. We observed a Group effect (F(2, 33) = 3.67, p < .05) for the confusion between surprise and happiness; pairwise comparisons indicated that the WS group confused surprise and happiness more often than either the AUT group (F(1, 33) = 5.5, p < .05) or the VMA group (F(1, 33) = 5.5, p < .05).

2.2.3. Emotion identification task

The "proportion of correct answers" dependent variable (see Table 2) was processed in a one-way ANOVA with the Group factor (3 levels: WS, AUT and VMA). We observed a Group effect (F(2, 33) = 5.74, p < .01), which we explored with pairwise comparisons.
We observed fewer correct answers in the WS group than in the AUT and VMA groups (F(1, 33) = 4.20, p < .05 and F(1, 33) = 11.31, p < .01 respectively). We then processed the same variable in a one-way ANOVA, again with the Group factor, for each separate emotion (anger, happiness, sadness, fear and surprise). We observed a Group effect for fear (F(2, 33) = 3.86, p < .05), indicating that the WS group produced fewer correct answers than the AUT and VMA groups (F(1, 33) = 4.74, p < .05 and F(1, 33) = 6.68, p < .05 respectively). We also observed a Group effect for sadness (F(2, 33) = 10.66, p < .001), indicating that the WS group produced fewer correct answers than the AUT and VMA groups (F(1, 33) = 7.13, p < .05 and F(1, 33) = 21.14, p < .001 respectively).

The "mean number of confusions" dependent variable was processed in an ANOVA, with Group as a between-groups factor (3 levels: WS, AUT and VMA), for each type of confusion (see Fig. 3). We observed a Group effect for the confusion between surprise and sadness (F(2, 33) = 3.85, p < .05): the WS group produced this confusion more often than the AUT group (F(1, 33) = 7.42, p < .05). We also observed a Group effect for the confusion between sadness and anger (F(2, 33) = 4.05, p < .05): both the WS and AUT groups produced this confusion more often than the VMA group (F(1, 33) = 4.44, p < .05 and F(1, 33) = 7.34, p < .05 respectively). Lastly, we observed a Group effect for the confusion between sadness and fear (F(2, 33) = 6.75, p < .05): the WS group produced this confusion more often than either the AUT group (F(1, 33) = 11.55, p < .01) or the VMA group (F(1, 33) = 8.48, p < .01).

Fig. 3. Mean number of confusions for each type of confusion and for each group in the emotion identification task; *p < .05.

2.2.4. Task comparison

For each emotion, the "proportion of correct answers" dependent variable was processed in a repeated-measures ANOVA, with Group as a between-groups factor (3 levels: WS, AUT and VMA) and Task as a within-subjects factor (3 levels: labelling, emotion matching and emotion identification). We did not observe any Group effect for anger (see Table 2), although we did observe a Task effect (F(2, 66) = 21.74, p < .0001): anger was more difficult to identify in the emotion matching task than in the emotion identification and labelling tasks (F(1, 33) = 33.04, p < .001 and F(1, 33) = 27.80, p < .0001 respectively). For happiness (see Table 2), we observed a Group effect (F(2, 33) = 6.82, p < .01), a Task effect (F(2, 66) = 11.22, p < .001) and a Group × Task interaction (F(4, 66) = 4.82, p < .01). The interaction indicates that happiness was more easily recognized in the emotion matching and identification tasks than in the labelling task, especially for the WS group (F(1, 33) = 7.75, p < .01 and F(1, 33) = 8.03, p < .01 respectively) and the VMA group (F(1, 33) = 13.86, p < .001 and F(1, 33) = 14.85, p < .001 respectively). In other words, for the WS and VMA groups the presence of a verbal indicator facilitated emotion recognition, whereas performances by the AUT group were not influenced by such cues. We did not observe any Group effect for fear (see Table 2), although we did observe a Task effect (F(2, 66) = 6.55, p < .01): fear was more difficult to identify in the labelling task than in the emotion matching and identification tasks (F(1, 33) = 5.71, p < .05 and F(1, 33) = 10.36, p < .01 respectively). We did not observe any Group effect for surprise (see Table 2), although we did observe a Task effect (F(2, 66) = 10.48, p < .0001).
Surprise was more difficult to identify in the labelling task than in the emotion matching and identification tasks (F(1, 33) = 7.13, p < .05 and F(1, 33) = 6.42, p < .05 respectively), and harder to recognize in the emotion identification task than in the emotion matching task (F(1, 33) = 14.86, p < .001). We observed a Group effect for sadness (see Table 2) (F(2, 33) = 9.02, p < .001), indicating that the WS group provided fewer correct answers than either the AUT group (F(1, 33) = 6.54, p < .05) or the VMA group (F(1, 33) = 17.78, p < .001). We also observed a Task effect (F(2, 66) = 8.27, p < .001): sadness was more difficult to identify in the labelling task than in the emotion identification task (F(1, 33) = 10.43, p < .01), and harder to recognize in the emotion matching task than in the emotion identification task (F(1, 33) = 31.31, p < .001).
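The analysis pipeline reported above (Levene's test for equality of variances, followed by a between-groups ANOVA and pairwise comparisons) can be sketched in a few lines. The code below is a minimal illustration on synthetic scores, not the authors' data or code: the group means are loosely inspired by the Table 2 row means, and the simple t-tests stand in for the planned F contrasts the paper reports.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 12  # participants per group, as in the study

# Hypothetical per-participant proportion-correct scores
# (synthetic; the real individual data are not available)
groups = {
    "WS":  np.clip(rng.normal(0.68, 0.10, n), 0, 1),
    "AUT": np.clip(rng.normal(0.80, 0.10, n), 0, 1),
    "VMA": np.clip(rng.normal(0.88, 0.10, n), 0, 1),
}

# Step 1: Levene's test for homogeneity of variance; a
# non-significant result licenses the parametric ANOVA
lev_stat, lev_p = stats.levene(*groups.values())
print(f"Levene: W = {lev_stat:.2f}, p = {lev_p:.3f}")

# Step 2: one-way between-groups ANOVA on the Group factor
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F(2, {3 * n - 3}) = {f_stat:.2f}, p = {p_anova:.4f}")

# Step 3: pairwise comparisons between groups (plain t-tests here,
# where the paper uses F contrasts on the ANOVA error term)
for a, b in [("WS", "AUT"), ("WS", "VMA"), ("AUT", "VMA")]:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}")
```

Note that the repeated-measures analyses in the paper additionally include a within-subjects factor (Gender, Movement or Task); a mixed within/between design like that would need a dedicated routine rather than `f_oneway`.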
