Download ISI English Article No. 37949
Article Title

Intermodal matching of emotional expressions in young children with autism
Article Code: 37949
Year of Publication: 2008
Pages (English PDF): 10
Source

Publisher: Elsevier - Science Direct

Journal: Research in Autism Spectrum Disorders, Volume 2, Issue 2, April–June 2008, Pages 301–310

Keywords

Autism; Intermodal matching; Emotional expression

English Abstract

This study examined the ability of young children with autism spectrum disorders (ASD) to detect affective correspondences between facial and vocal expressions of emotion using an intermodal matching paradigm. Four-year-old children with ASD (n = 18) and their age-matched, normally developing peers (n = 18) were presented with pairs of videotaped facial expressions accompanied by a single soundtrack matching the affect of one of the two facial expressions. In one block of trials, the emotions were portrayed by the children's mothers; in another block, the same emotion pairs were portrayed by an unfamiliar woman. Findings showed that ASD children were able to detect the affective correspondence between facial and vocal expressions of emotion portrayed by their mothers, but not by a stranger. Furthermore, in a control condition using inanimate objects and their sounds, ASD children also showed a preference for sound-matched displays. These results suggest that children with ASD do not have a general inability to detect intermodal correspondences between visual and auditory events; however, their ability to detect affective correspondences between facial and vocal expressions of emotion may be limited to familiar displays.

English Results

Results

A 2 × 4 (group × block order) MANOVA comparing overall attention and intermodal preference across orders of block presentation showed no significant effect of order of stimulus presentation in either group of children and no interaction between group and order, so the data were collapsed across orders.

2.1. Intermodal matching of emotional expressions

First, we assessed children's overall visual attention to the videotaped displays. Using the percentage of time children looked at the videotaped events out of the total presentation time, we conducted an ANOVA with group as a between-subjects variable and familiarity (mother, unfamiliar woman) as a within-subjects variable. A significant main effect of group showed that ND children looked at the emotion displays significantly more than ASD children (M = 89%, S.E. = 3% for the ND group; M = 74%, S.E. = 3% for the ASD group). Nevertheless, children with autism attended to the displays for durations sufficient to demonstrate intermodal matching preferences. There was no main effect of familiarity on total looking time, suggesting that children in both the ASD and ND groups maintained the same level of attention to the emotion displays of their mothers and of the unfamiliar woman.

To assess children's ability to recognize the correspondence between facial and vocal expressions, we computed looking preferences for sound-matched facial expressions using difference scores for familiar and unfamiliar faces. These were calculated as the difference between the percentages of time children looked at the sound-matched versus the non-sound-matched display of each emotion (see the illustrative sketch at the end of this subsection). A multivariate repeated-measures ANOVA was conducted on the preference scores for sound-matched facial expressions, with group (ASD, ND) as a between-subjects variable and familiarity (mother, unfamiliar woman) and emotion (happy, sad, angry) as within-subjects variables. The main effect of familiarity was significant (F(1, 34) = 4.48, p < 0.04): the average preference for sound-matched emotions was 12% (S.E. = 2%) when the emotions were portrayed by mothers and 8% (S.E. = 2%) when they were portrayed by the unfamiliar woman. The two-way interactions were not significant, but the three-way interaction between group, emotion, and familiarity was (F(2, 33) = 4.66, p < 0.01); see Table 1.

Table 1. Mean preference difference scores (sound-matched minus non-sound-matched looking) for facial expressions of the mother and the unfamiliar woman, by group; values are M (S.E.).

Condition            Emotion   Autism        Normally developing
Mother (familiar)    Happy     0.13 (0.03)   0.13 (0.03)
                     Sad       0.07 (0.02)   0.16 (0.02)
                     Angry     0.11 (0.03)   0.14 (0.03)
                     Total     0.10 (0.02)   0.14 (0.02)
Unfamiliar woman     Happy     0.02 (0.02)   0.11 (0.02)
                     Sad       0.06 (0.03)   0.11 (0.03)
                     Angry     0.07 (0.03)   0.14 (0.03)
                     Total     0.05 (0.02)   0.12 (0.02)

To follow up the three-way interaction, we conducted two separate 2 × 3 ANOVAs with group (ASD, ND) as a between-subjects variable and emotion (happy, sad, angry) as a within-subjects variable, one for the familiar condition and one for the unfamiliar condition. In the unfamiliar condition, a significant main effect of group (F(1, 34) = 4.01, p < 0.05) showed that the preference for sound-matched emotions among ASD children was significantly lower than among ND children (M = 5%, S.E. = 2%, and M = 12%, S.E. = 2%, for the ASD and ND groups, respectively). In the familiar condition, however, the main effect of group was not significant (F(1, 34) = 1.65, p = 0.21), suggesting that ASD children showed the same level of preference for the sound-matched emotions of their mothers as their normally developing age-matched peers. A significant group × emotion interaction in the familiar condition (F(2, 33) = 4.18, p < 0.02) was followed up with ANOVAs comparing the preference for each emotion across groups: the groups did not differ in their preferences for sound-matched happy and angry expressions of mothers, but ASD children showed a weaker preference for sound-matched sad expressions of mothers (F(1, 34) = 8.64, p < 0.01).

In summary, ASD children were almost as proficient at matching the facial and vocal expressions of their mothers as their age-matched peers (10.3% versus 14.5% for the ASD and ND groups, respectively), especially for happy and angry displays. When the emotions were portrayed by an unfamiliar woman, however, ASD children were less proficient at matching the vocal and facial expressions than their age-matched peers (5% versus 12%, respectively), especially for happy expressions. Furthermore, across conditions and emotions, correlations between child age and preference for sound-matched events were not significant (r's = 0.20–0.38, p > 0.22), with the single exception of a significant correlation between child age and preference for the sound-matched sad expressions of the unfamiliar woman in the ASD group (r = 0.48, p < 0.04).
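The preference-score measure above reduces to a simple difference of looking-time percentages. The sketch below is illustrative only, not the authors' code: the function name, the use of seconds as the time unit, and the assumption that percentages are taken over total presentation time are all assumptions made here for clarity.

def preference_score(matched_look_s: float,
                     nonmatched_look_s: float,
                     presentation_s: float) -> float:
    """Percentage-point preference for the sound-matched display:
    % of presentation time on the sound-matched face minus
    % of presentation time on the non-sound-matched face."""
    pct_matched = 100.0 * matched_look_s / presentation_s
    pct_nonmatched = 100.0 * nonmatched_look_s / presentation_s
    return pct_matched - pct_nonmatched

# Example: 9 s on the sound-matched face and 6.6 s on the other face
# during a 20 s trial gives a 12-point preference, matching the mean
# reported above for mothers' displays.
print(preference_score(9.0, 6.6, 20.0))  # 12.0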
2.2. Control condition: Inanimate events

Children in both groups attended to the physical-inanimate event films for almost the entire presentation time (ASD: M = 90%, S.D. = 7%; ND: M = 91%, S.D. = 11%). Intermodal preferences were calculated as the difference between the percentages of time children looked at inanimate events presented with a matching versus a non-matching sound. An ANOVA on the preference scores for sound-matched displays, with group as a between-subjects variable, showed no difference between the groups (F(1, 34) = 2.90, p = 0.10). ASD children showed an average 7% (S.D. = 13%) preference for sound-matched inanimate displays, and ND children an average preference of 9% (S.D. = 12%). Thus, children in both groups were able to detect intermodal correspondences between visual and auditory events with similar proficiency. Correlations between child age and looking preference for sound-matched events were not significant in either group (ASD: r = −0.07, p > 0.76; ND: r = −0.20, p > 0.41), suggesting that the ability to match object events and their sounds was consistent across participants in each group.
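The p values reported above follow directly from the F statistics and their degrees of freedom via the F distribution's survival function. A minimal sketch of that check, assuming scipy is available (the statistics plugged in are the ones reported in the text; this is a reader's verification, not part of the study's analysis):

from scipy.stats import f

def p_from_f(F: float, df1: int, df2: int) -> float:
    """p value implied by an F statistic with (df1, df2) degrees of freedom."""
    return float(f.sf(F, df1, df2))

# Group effect in the control condition: F(1, 34) = 2.90 -> p ~ 0.098,
# i.e., not significant at the 0.05 level.
print(round(p_from_f(2.90, 1, 34), 3))
# Sad-expression follow-up in the familiar condition:
# F(1, 34) = 8.64 -> p ~ 0.006, i.e., p < 0.01.
print(round(p_from_f(8.64, 1, 34), 3))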