Emotional decoding in facial expressions, scripts and videos: A comparison between normal, autistic and Asperger children
Article code | Publication year | English article length |
---|---|---|
37795 | 2012 | 11-page PDF |

Publisher : Elsevier - Science Direct
Journal : Research in Autism Spectrum Disorders, Volume 6, Issue 1, January–March 2012, Pages 193–203
Abstract
ASD subjects are described as showing particular difficulty in decoding emotional patterns. This paper explored linguistic and conceptual skills in response to emotional stimuli presented as emotional faces, scripts (pictures) and interactive situations (videos). Participants with autism, participants with Asperger syndrome and control participants were shown facial, pictorial and video representations of six basic emotions (happiness, anger, fear, sadness, surprise, disgust). They were asked to identify the emotion and to indicate possible causes of the emotional state. A semantic analysis focusing on labeling and conceptualization was applied to the verbal reports. Log-linear analyses showed different representations across participants as a function of emotion, pathology and presentation modality. Autistic participants were able to correctly decode primary emotions while showing difficulties with surprise. In contrast, Asperger participants' performance was more similar to that of control subjects. Finally, when situational correlates were provided, a "facilitation effect" on the representation of emotions was evident.
Introduction
Autistic children generally present a dysfunction in the emotional domain, evidenced by a lack of eye contact and of attention to the human face. The human face is considered a communicative vehicle that allows individuals to express emotional states and attitudes (Balconi, 2008). Within relational contexts, the ability to decode other people's emotional expressions quickly and correctly serves the fundamental function of attuning one's own behavior to contextual requirements. In other words, facial expressions represent basic communicative signals with high interpersonal value and adaptive function. Therefore, the inability of autistic children to use the face as a social cue for decoding emotional dimensions can be placed within the frame of their general impairment in social cognition processes (Balconi & Carrera, 2007).

In particular, the ability to recognize emotional patterns in faces is the result of a selective adaptive process and of neural maturation, leading to the creation of specific and specialized neural networks for decoding emotional configurations (De Haan, Pascalis, & Johnson, 2002). However, experience also plays a fundamental role in the development of emotional decoding. In fact, when emotions are considered not as the expression of distinct neural programs, as accounted for by categorical models (Ekman, 1982; Keltner & Ekman, 2003), but rather as interrelated constructs differing along dimensions such as arousal or hedonic valence (Russell, 1997), the decoding of emotions also becomes a function of appraisal processes. Thus, neural maturation, together with the role experience plays in the selection of cognitive and behavioral responses, contributes to the identification of separate emotional dimensions. Furthermore, it is necessary to differentiate between recognition and labeling: developmental studies have shown that in some cases children are able to recognize and differentiate between emotional expressions, whereas their performance is poor when they are asked to specifically label the emotional correlates (Balconi & Carrera, 2007; Widen & Russell, 2003).

Individuals with autism show a severe impairment of the ability to understand the emotional dimension, especially when it is expressed through mimic facial patterns (Balconi & Carrera, 2006; Balconi & Carrera, 2007). Indeed, several studies have found that, for autistic children, facial emotions have weak salience compared with other, non-emotional cues (Celani, Battacchi, & Arcidiacono, 1999). They tend to ignore emotional expressions unless explicitly required to attend to them. However, the extent of the deficit in understanding facial expressions varies (Gepner, de Gelder, & de Schonen, 1996) with the level of cognitive functioning (Rojahn, Lederer, & Tassè, 1995), with low-functioning subjects showing severe impairment of emotional decoding and high-functioning subjects performing comparably to normal controls. Another factor modulating autistic performance in emotion decoding is the type of emotion to be recognized. Several empirical findings indicate that the ability to recognize simple emotional configurations, such as happiness or anger, is less impaired than the competence to recognize complex or secondary emotions, such as surprise or embarrassment (Balconi & Lucchiari, 2005; Capps et al., 1992).
The difficulties in these latter cases might be due to the complex relation existing between an emotional expression and its causal antecedents (Hillier & Allinson, 2002), which, in order to be reconstructed, requires preserved mentalization and meta-representational functions (Baron-Cohen, Spitz, & Cross, 1993). This difficulty may be partially overcome by making contextual relations more salient, as happens when emotional correlates are presented in a specific context. Indeed, another variable that should be taken into consideration is the contextual domain in which the emotions take place. Since emotional comprehension entails appraisal processes and emotional expression does not usually happen in a void, contextual information might determine a consistent part of the meaning of an emotional expression (Fridlund, 1991). In this light, the meaning of a facial display is a function of how the facial features are related to other features of the context. The perception of emotion is a contingent rather than a necessarily automatic mechanism: it relies upon the ability to decode facial structural patterns and to integrate them with contextual and relational information.

Therefore, emotions are recognized through the development and generalization of emotional scripts. These scripts include not only facial expressions, but also the representation of causal factors, of the physical and social context, of actions and their consequences, as well as the subjective experience and the cognitive appraisal of the situation (Bullock & Russell, 1986; Russell & Widen, 2002). Among these cues, the representation of causal bonds, that is, of a set of causal events and their behavioral consequences, has remarkable significance, because these constitute the most explicative elements of the emotional experience (Want & Harris, 2001). In fact, if facial cues per se are fundamental to inferring emotions, it also holds true that facial expressions always occur within interactive contexts (Russell & Widen, 2002). In a recent study (Balconi & Carrera, 2007) comparing the performance of autistic and normal children on an emotion detection task, using both facial representations and emotional scripts, it was shown that emotional scripts facilitated the conceptualization of emotions in autistic children. The situational component contained in scripts seems to have allowed the activation of a more complex contextual representation, which takes into account the context in which the emotional event happens, the emotional causes, the sequence of actions and their consequences (Bullock & Russell, 1986). The presence of interactional features characterizing the emotional experience thus constitutes a facilitating element for emotion comprehension, also producing better descriptions in emotion labeling.

The present study aims to investigate the lexicalization and conceptualization of emotions, taking into account the complexity of emotional correlates and the role of situational elements. Specifically, we compared the performance of normal, low-functioning (autistic) and high-functioning (Asperger) children in decoding emotional correlates presented in different modalities: facial expressions, scripts and videos.
Results
In order to analyze the effect of pathology (autistic, Asperger vs. controls), stimulus type (facial expression, script, video) and emotion type (six types), a log-linear hierarchical analysis with a saturated model was applied to each conceptual category (Areni, Ercolani, & Scalisi, 1994). We report the statistical data for the main effects of pathology (3 levels), stimulus type (3 levels) and emotion type (6 levels), and for the interaction effects that were significant (Table 1).

Table 1. Significant effects for log-linear analysis "fit model".

Measure | Significant effect | χ² | p |
---|---|---|---|
Label correctness | Pathology × emotion × correctness | χ²(10; n = 15) = 23.486 | .009 |
 | Pathology × correctness | χ²(2; n = 15) = 27.855 | ≤ .001 |
 | Emotion × correctness | χ²(5; n = 15) = 36.871 | ≤ .001 |
Verbal relevance | Pathology × emotion × relevance | χ²(10; n = 15) = 22.804 | .011 |
 | Pathology × relevance | χ²(2; n = 15) = 27.205 | ≤ .001 |
 | Emotion × relevance | χ²(5; n = 15) = 39.542 | ≤ .001 |
Verbal content | Pathology × content | χ²(6; n = 15) = 23.085 | .001 |
Script description | Pathology × script | χ²(6; n = 15) = 40.449 | ≤ .001 |
 | Emotion × script | χ²(15; n = 15) = 106.594 | ≤ .001 |
 | Type | χ²(2; n = 15) = 145.967 | ≤ .001 |
Eliciting cause | Pathology × cause | χ²(6; n = 15) = 51.737 | ≤ .001 |
 | Emotion × cause | χ²(15; n = 15) = 108.322 | ≤ .001 |
 | Type | χ²(2; n = 15) = 39.987 | ≤ .001 |
Cause level | Pathology × level | χ²(2; n = 15) = 11.036 | ≤ .001 |

In Table 2 and Table 3 percentage values for each linguistic category are reported.

Table 2. Percentage values for each verbal category of labeling (cells give Face/Script/Video percentages).

Measure | Group | Category | Disgust | Happiness | Anger | Fear | Sadness | Surprise |
---|---|---|---|---|---|---|---|---|
Label correctness | Autism | | 0/16/50 | 66/66/83 | 83/50/50 | 50/33/66 | 50/16/50 | 0/16/0 |
 | Asperger | | 50/50/75 | 100/100/100 | 75/100/25 | 75/100/100 | 100/100/100 | 50/25/25 |
 | Control | | 80/60/80 | 80/80/80 | 100/20/40 | 40/80/70 | 100/80/80 | 60/60/40 |
Verbal relevance | Autism | | 16/66/83 | 83/83/100 | 83/50/66 | 66/33/66 | 66/83/66 | 17/33/0 |
 | Asperger | | 50/100/100 | 100/100/100 | 75/100/75 | 100/100/100 | 100/100/100 | 50/25/25 |
 | Control | | 80/80/100 | 100/80/100 | 100/60/100 | 60/80/60 | 100/100/100 | 100/100/60 |
Verbal content | Autism | Emotional state | 66/50/83 | 66/66/83 | 83/66/66 | 66/66/66 | 50/16/50 | 66/66/100 |
 | | Mental state | 16/0/16 | 16/16/16 | 0/16/16 | 0/16/16 | 33/33/33 | 0/0/0 |
 | | Action | 16/33/0 | 16/16/0 | 16/16/0 | 33/16/16 | 16/16/16 | 33/33/0 |
 | Asperger | Emotional state | 50/50/75 | 100/100/100 | 100/100/25 | 75/100/100 | 100/100/100 | 75/75/100 |
 | | Mental state | 50/50/0 | 0/0/0 | 0/0/50 | 25/0/0 | 0/0/0 | 25/25/0 |
 | | Action | 0/0/25 | 0/0/0 | 0/0/25 | 0/0/0 | 0/0/0 | 0/0/0 |
 | Control | Emotional state | 80/60/80 | 80/100/80 | 100/80/40 | 60/100/80 | 80/80/100 | 60/80/80 |
 | | Mental state | 20/20/0 | 20/0/20 | 0/20/40 | 0/0/20 | 0/0/0 | 20/0/0 |
 | | Action | 0/20/20 | 0/0/0 | 0/0/20 | 40/0/0 | 20/20/0 | 20/20/20 |
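The log-linear approach used here treats each conceptual category as a contingency table of response counts over categorical factors. As a purely illustrative sketch (not the authors' analysis code), the following Python snippet shows how a saturated log-linear model over pathology × emotion × correctness could be fitted as a Poisson GLM with statsmodels; the counts, factor levels and variable names below are hypothetical placeholders.

```python
# Illustrative sketch only: a saturated log-linear (Poisson) model over a
# hypothetical pathology x emotion x correctness table of response counts.
import itertools
import random

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

pathologies = ["autism", "asperger", "control"]
emotions = ["happiness", "anger", "fear", "sadness", "surprise", "disgust"]
correctness = ["correct", "incorrect"]

# One row per cell of the 3 x 6 x 2 table, with made-up counts.
random.seed(0)
cells = [
    {"pathology": p, "emotion": e, "correctness": c, "count": random.randint(1, 20)}
    for p, e, c in itertools.product(pathologies, emotions, correctness)
]
table = pd.DataFrame(cells)

# '*' expands to all main effects and interactions, i.e. the saturated
# model for these three categorical factors.
saturated = smf.glm(
    "count ~ pathology * emotion * correctness",
    data=table,
    family=sm.families.Poisson(),
).fit()
print(saturated.summary())
```

In a saturated model every main effect and interaction is included, so reduced hierarchical models are typically compared against it to isolate the terms that significantly improve fit, as summarized in Table 1.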
Table 3. Percentage values for each verbal category of conceptualization (cells give Script/Video percentages).

Measure | Group | Category | Disgust | Happiness | Anger | Fear | Sadness | Surprise |
---|---|---|---|---|---|---|---|---|
Situation description | Autism | Object | 66/16 | 33/0 | 33/16 | 33/83 | 33/33 | 66/83 |
 | | Person | 16/66 | 50/50 | 0/17 | 50/0 | 50/50 | 33/16 |
 | | Interaction | 0/0 | 0/50 | 50/50 | 0/16 | 0/16 | 0/0 |
 | Asperger | Object | 0/50 | 0/0 | 0/25 | 75/50 | 25/0 | 25/50 |
 | | Person | 100/50 | 0/25 | 100/0 | 25/50 | 75/100 | 75/50 |
 | | Interaction | 0/0 | 100/75 | 0/75 | 0/0 | 0/0 | 0/0 |
 | Control | Object | 20/0 | 0/0 | 0/0 | 40/0 | 0/0 | 70/20 |
 | | Person | 80/100 | 60/0 | 40/20 | 60/100 | 100/80 | 40/80 |
 | | Interaction | 0/0 | 40/100 | 60/80 | 0/0 | 0/20 | 0/0 |
Eliciting cause | Autism | Internal | 50/83 | 16/0 | 0/0 | 16.7/0 | 16/33 | 50/33 |
 | | External | 33/0 | 33/0 | 50/0 | 50/50 | 16/66 | 33/66 |
 | | Interpersonal | 0/16 | 33/100 | 33/66 | 16/16 | 50/0 | 0/0 |
 | Asperger | Internal | 50/25 | 0/0 | 0/25 | 0/0 | 50/50 | 0/0 |
 | | External | 50/75 | 0/0 | 100/0 | 100/100 | 50/50 | 100/100 |
 | | Interpersonal | 0/0 | 100/100 | 0/75 | 0/0 | 0/0 | 0/0 |
 | Control | Internal | 80/100 | 40/20 | 20/20 | 60/60 | 40/60 | 80/100 |
 | | External | 20/0 | 0/0 | 0/0 | 40/20 | 20/20 | 0/0 |
 | | Interpersonal | 0/0 | 60/80 | 80/80 | 0/0 | 40/20 | 0/0 |
Cause level | Autism | Inferential | 50/50 | 50/33 | 33/16 | 66/16 | 33/66 | 66/66 |
 | | Descriptive | 33/50 | 33/66 | 50/50 | 33/33 | 33/33 | 16/33 |
 | Asperger | Inferential | 50/75 | 50/50 | 100/50 | 100/100 | 75/100 | 100/100 |
 | | Descriptive | 50/25 | 50/50 | 0/50 | 0/0 | 25/0 | 0/0 |
 | Control | Inferential | 80/80 | 100/80 | 60/80 | 100/60 | 80/80 | 60/60 |
 | | Descriptive | 20/20 | 0/0 | 40/0 | 0/20 | 20/20 | 20/40 |

3.1. Labeling

3.1.1. Label correctness
In line with the original hypotheses, autistic participants showed impairments in labeling emotional cues, while Asperger participants' performance was similar to that of control participants (Fig. 1). With regard to single emotions, autistic participants showed a good ability to label happiness, anger and fear, a scarce ability to label sadness, and a severe impairment in labeling disgust and surprise. In contrast, Asperger participants showed a good ability, comparable to control participants' performance, to label happiness, anger, fear and sadness. However, they revealed an impaired ability to label disgust and surprise (Fig. 2).

Fig. 1. Frequency of correct labels produced by control, autistic and Asperger subjects.
Fig. 2. Frequency of correct labels produced for each emotional dimension by control, autistic and Asperger subjects.

3.1.2. Verbal relevance
With regard to the global verbal production for each emotional content, the verbal production of the clinical groups was less relevant than that of the control group. Asperger participants' performance was at the level of autistic participants' for most emotional representations (disgust, sadness, surprise), both being worse than controls', and was worse than autistic participants' in the case of happiness. The case of happiness is particularly interesting, since autistic participants performed better than Asperger and control participants, indicating that this emotion is semantically well represented for the autistic group. Surprise appears to be the emotion which discriminates between the clinical and control groups, with a worse performance from the former (Fig. 3 and Fig. 4).

Fig. 3. Verbal relevance of control, autistic and Asperger subjects.
Fig. 4. Verbal relevance of control, autistic and Asperger subjects as a function of emotional dimensions.

3.1.3. Verbal content
All groups referred mainly to emotional states when asked to label the emotional stimuli. In comparison to the Asperger and control groups, autistic participants chose the categories of mental state or action description more frequently, even though they too mostly used emotional state descriptions. The Asperger group addressed action least frequently during the task (Fig. 5).

Fig. 5. Verbal content of control, autistic and Asperger subjects' verbal production.

3.2. Conceptualization

3.2.1. Script description
In describing the emotional cues, the autistic group focused more prominently on inanimate objects and less on the characters or on the interaction between characters. The inverse pattern was shown by control participants, who focused more on the characters and the interaction and less on the objects. The Asperger group presented a pattern more similar to the control group's (with an increased interest in persons), although they also directed more attention to objects than the control group did (Fig. 6). With regard to the different emotions, when describing disgust and sadness participants referred more often to the personal characters, while happiness and anger were more often described by making reference to the interaction, and fear and surprise seemed to draw the participants' attention toward the object. Overall, participants performed better with video material than with script material (Fig. 7).

Fig. 6. Attentional focus of control, autistic and Asperger subjects' script descriptions.
Fig. 7. Attentional focus of control, autistic and Asperger subjects' script descriptions as a function of the emotional dimension.

3.2.2. Eliciting cause
In this task, the control group tended to attribute the eliciting cause to an internal event. In contrast, the autistic and Asperger groups more often attributed the cause of an emotional cue to external events. Finally, interpersonal events were recognized as an eliciting cause mainly by the autistic and control groups, while the Asperger group rarely referred to the interactive dimension as a potential cause of the expressed emotion (Fig. 8). Considering each emotional dimension separately, the eliciting cause of disgust was mainly recognized as an internal event. Happiness and anger were more often elicited by interpersonal causes, and fear by an external event. Finally, the video condition produced more consistent causal attributions than the script condition.

Fig. 8. Identification of the eliciting cause by control, autistic and Asperger subjects.

The eliciting cause attribution of sadness and surprise was less homogeneous, since participants referred equally to internal and external events as possible causes of these emotions. However, the interpersonal dimension was also listed, albeit with lower frequency, as a possible cause of sadness (Fig. 9).

Fig. 9. Identification of the eliciting cause as a function of the emotional dimension.

3.2.3. Cause level
Generally, the control group and the Asperger group identified the eliciting cause at an inferential level, while the autistic group more frequently explained the causal link at a descriptive level.