Perception of emotions from facial expressions in high-functioning adults with autism
|Article code||Publication year||English article||Persian translation||Word count|
|37799||2012||7-page PDF||Available to order||5802 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Neuropsychologia, Volume 50, Issue 14, December 2012, Pages 3313–3319
Abstract

Impairment in social communication is one of the diagnostic hallmarks of autism spectrum disorders, and a large body of research has documented aspects of impaired social cognition in autism, both at the level of the processes and the neural structures involved. Yet one of the most common social communicative abilities in everyday life, the ability to judge somebody's emotion from their facial expression, has yielded conflicting findings. To investigate this issue, we used a sensitive task that has been used to assess facial emotion perception in a number of neurological and psychiatric populations. Fifteen high-functioning adults with autism and 19 control participants rated the emotional intensity of 36 faces displaying basic emotions. Every face was rated 6 times, once for each emotion category. The autism group gave ratings that were significantly less sensitive to a given emotion, and less reliable across repeated testing, resulting in overall decreased specificity in emotion perception. We thus demonstrate a subtle but specific pattern of impairments in facial emotion perception in people with autism.
Introduction

Impaired social communication is one of the hallmarks of autism spectrum disorders (ASD). It is commonly thought that people with ASD are also impaired in a specific aspect of social communication, the recognition of basic emotions from facial expressions (i.e., happiness, surprise, fear, anger, disgust, sadness). However, the literature on this topic offers highly conflicting findings to date: whereas some studies find clear impairments in facial affect recognition in autism (Ashwin et al., 2006, Corden et al., 2008, Dziobek et al., 2010, Law Smith et al., 2010, Philip et al., 2010 and Wallace et al., 2011), others do not (Adolphs et al., 2001, Baron-Cohen et al., 1997, Neumann et al., 2006 and Rutherford and Towns, 2008). Part of this discrepancy may be traced to the known heterogeneity of ASD, together with differences in the stimuli and tasks used in the various studies; and part may derive from the specific aspects of facial emotion perception that were analyzed in the studies. A recent and comprehensive review attempted to make sense of this mixed literature (Harms, Martin & Wallace, 2010). The authors suggest that the ability of individuals with an ASD to identify facial expressions depends, in large part, upon several factors and their interactions, including demographics (i.e., subjects' age and level of functioning), the stimuli and experimental task demands, and the dependent measures of interest (e.g., emotion labeling accuracy, reaction times, etc.). Other factors, such as ceiling effects or the use of compensatory strategies by individuals with an ASD, might also obscure true group differences that would otherwise have been found.
The authors further make the interesting point that other behaviorally- or biologically-based measures almost invariably demonstrate that individuals with ASDs process faces differently, so perhaps previous studies of facial affect recognition which failed to find group differences used tasks and/or measures that are simply not sensitive enough to detect group differences. Difficult or unfamiliar tasks are more likely to reveal impairment, since they are better able to avoid ceiling effects and, in some cases, are less well-rehearsed and preclude compensatory strategies. Two distinct methodological approaches have been used to achieve these goals of providing sensitive measures of facial affect recognition. One approach has been to manipulate the stimuli in some way, such as with facial morphing (e.g., Humphreys et al., 2007, Law Smith et al., 2010 and Wallace et al., 2011). This approach gives the experimenter parametric control of the intensity of the stimuli, and so can assess emotion discrimination at a fine-grained level, but with the important caveat that the morphs are artificially generated and not necessarily the same as the subtle expressions that one would encounter in the real world. The second main approach is to modify the task demands (e.g., changing task instructions, reducing the length of stimulus presentation, etc.), rather than manipulating the stimuli in any way. By doing so, the experimenter can increase the task difficulty and reduce the likelihood that an explicit, well-rehearsed cognitive strategy is used for decoding the expression, while still using naturalistic stimuli. Here, we took this latter approach. We used a well-validated and widely used set of facial emotion stimuli (Paul Ekman's Pictures of Facial Affect; Ekman, 1976) and obtained detailed ratings of emotion. 
An additional motivation for using these stimuli is that they provide continuity with a number of prior studies in a wide variety of populations including ASD (Adolphs et al., 2001), patients with brain lesions (Adolphs et al., 1995 and Adolphs et al., 2000), frontotemporal dementia (Diehl-Schmid et al., 2007), Parkinson's disease (Sprengelmeyer et al., 2003), and depression (Persad & Polivy, 1993). Given that facial expressions are complex and are often comprised of varying degrees of two or more emotions in the real world, participants were asked to determine the intensity levels of each of the 6 basic emotions for every emotional face they were shown (e.g., rate a surprised face on its intensity (i.e., degree) of happiness, surprise, fear, anger, disgust, and sadness). In keeping with previous descriptions of this task (e.g., Adolphs et al., 1994, 1995), we refer to it as an emotion recognition task, since it requires one to recognize (and rate) the level of a particular emotion displayed by a face. For instance, for one to rate a surprised face as exhibiting a particular intensity of fear requires recognizing that emotion, fear, in the first place. Given that participants are unlikely to have practiced this task during any sort of behavioral intervention they may have been exposed to, we expected this task to reveal group differences, particularly in the overall intensity ratings and the degree of response selectivity (i.e., tuning or sharpness) for particular emotional facial expressions. We also assessed test-retest reliability in a subset of our study sample, to explore whether a less stable representation of emotional expression would be reflected in increased response variability across these testing sessions.
English results
Results

A comprehensive plot of the data is given in Fig. 2, which reveals several findings. Overall, at the mean group level, the pattern of ratings on the different emotion labels across all the different facial emotion stimuli was highly correlated between the ASD and control groups (mean Pearson's r across all stimuli=0.97, p<0.00001; see Fig. 2). Both groups showed a similar pattern in which they assigned the highest intensity to concordant ratings, and displayed similar patterns of "confusion" for particular emotions (e.g., fear-surprise; disgust-anger), though in this context confusion does not necessarily mean the judgment was incorrect. Happy faces were rated with the greatest selectivity in both subject groups (Fig. 2). Neutral faces were rated similarly by the two groups across the 6 emotion judgments, with no main effect of Group [F(1,32)=0; n.s.] and no Group×Emotion Judgment interaction [F(5,160)=0.26, p=0.94].

Fig. 2. (A) Autism and control group rating matrices. The intended facial expression is given on the y-axis, and the emotion rating category is given on the x-axis. Concordant ratings fall along the diagonal, and discordant ratings fall along the off-diagonal. (B) The difference between autism and control groups. No individual cell survives FDR correction for multiple comparisons (q<0.05).

Despite these similarities between subject groups, we also found several important differences. Visual inspection of the pattern of data shown in Fig. 2 suggests reduced selectivity in the ASD group, which was confirmed by a significant main effect of Group [F(1,32)=6.74, p=0.014]. As described above, selectivity is defined for each facial expression as the difference between the concordant and discordant ratings (i.e., the cell that falls on the diagonal minus the non-diagonal cells in each row).
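The selectivity measure just described can be sketched numerically. This is an illustrative reconstruction, not the authors' code; in particular, summarizing the off-diagonal (discordant) cells by their mean is our assumption about how the diagonal-minus-off-diagonal difference is computed, and all names and toy values are hypothetical.

```python
import numpy as np

def selectivity(rating_matrix):
    """Per-emotion selectivity from an (n_emotions x n_emotions) rating
    matrix: rows = intended expression, columns = rated emotion category.
    Selectivity = concordant (diagonal) rating minus the mean of the
    discordant (off-diagonal) ratings in that row."""
    m = np.asarray(rating_matrix, dtype=float)
    n = m.shape[0]
    diag = np.diag(m)                            # concordant ratings
    off_mean = (m.sum(axis=1) - diag) / (n - 1)  # mean discordant rating per row
    return diag - off_mean

# Toy 3-emotion example of a sharply "tuned" rater (illustrative data)
ratings = np.array([[4.0, 0.0, 0.0],
                    [0.5, 3.5, 0.0],
                    [0.0, 1.0, 3.0]])
print(selectivity(ratings))  # one selectivity value per intended emotion
```

A flatter rating matrix (higher off-diagonal values) would yield lower selectivity, which is the pattern the autism group showed.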
Although there was also a main effect of Emotion [F(5,160)=58.01, p<0.0001], there was no Group×Emotion interaction [F(5,160)=0.65, p=0.48]. Across the 6 facial emotions, the mean selectivity in the control group was 4.08 (SD=0.83), while the mean selectivity in the ASD group was 3.34 (SD=0.81) (see Fig. 3A). Furthermore, selectivity was not significantly associated with full-scale IQ in the ASD group [r=0.13, p=0.66]. To ensure that these findings could not be accounted for by possible differences between groups in the range of the rating scale used, we z-scored the data with respect to each participant's distribution of ratings so that all subjects' ratings were on a comparable scale. To do so, we subtracted each participant's mean rating (across all the faces) from their rating for a given face, and divided by their standard deviation (across all the faces). Even after this z-score normalization, the main effect described above remained significant [F(1,32)=7.05, p=0.012].

Fig. 3. Scatterplots of emotion selectivity, concordant intensity, and discordant intensity ratings. These plots represent data that are collapsed across all 6 emotion categories.

To further visualize the reduced selectivity between emotions exhibited by the autism group, we used non-metric multidimensional scaling (MDS). To carry out this analysis, individual subject similarity matrices were derived by calculating the correlations between the 6 ratings given for each face and the 6 ratings given for every other face (see Adolphs et al., 1994). Next, we Fisher z-transformed these correlation matrices, averaged them together for each group separately, converted the averages back to correlation coefficients with the inverse Fisher z-transformation, and then transformed them into dissimilarity matrices by subtracting the result from 1.
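The two normalization steps just described (per-participant z-scoring, and Fisher-z averaging of correlation matrices into a group dissimilarity matrix) can be sketched as follows. This is a minimal reconstruction under our own naming; the clipping bound used to keep arctanh finite is our assumption, not a detail reported in the text.

```python
import numpy as np

def zscore_ratings(ratings):
    """Center one participant's ratings on their own mean and scale by
    their own SD, so all subjects are on a comparable scale."""
    r = np.asarray(ratings, dtype=float)
    return (r - r.mean()) / r.std()

def group_dissimilarity(corr_stack):
    """Average a stack of per-subject correlation matrices in Fisher-z
    space, map the mean back to r, and convert to dissimilarity (1 - r).
    corr_stack: (n_subjects, n_faces, n_faces) array."""
    z = np.arctanh(np.clip(corr_stack, -0.999, 0.999))  # Fisher z-transform
    r_mean = np.tanh(z.mean(axis=0))                    # inverse transform
    return 1.0 - r_mean
```

Averaging in z-space rather than averaging raw correlations is the standard rationale for the Fisher transform: z-values are approximately variance-stabilized, so the group mean is less biased.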
MDS then determined the Euclidean distance (corresponding to the perceived similarity) between the 39 faces in 2 dimensions (see Fig. 4); this dimensionality was chosen following visual inspection of a scree plot showing that 2 dimensions captured most of the variance. The amount of MDS stress, which provides a goodness-of-fit measure of the MDS result, was similar between groups (ASD=0.097; controls=0.094). However, one can observe that the ASD group shows less separation between the different emotion categories, confirming the above finding of reduced emotion selectivity.

Fig. 4. Multidimensional scaling (MDS) of perceived similarity of the emotional expression in faces. Each colored dot represents a single Ekman face (n=39), colored according to the intended expression of the face (happy=cyan; surprised=green; afraid=red; angry=yellow; disgust=orange; sad=magenta; neutral=gray). The Euclidean distance between points represents their perceived similarity. In both groups, the faces belonging to a particular emotion category generally cluster near one another, although the autism group has less separation overall between the different emotion categories (mean pairwise distance between points in the ASD group=0.814; mean pairwise distance in the control group=0.923). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Selectivity, as defined above, could be a consequence of two distinct factors, which are not mutually exclusive: reduced selectivity might result from reduced intensity ratings for the concordant emotion, or it might be a consequence of increased "confusion" (i.e., higher intensity ratings for the discordant emotions).
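The embedding step and the mean pairwise separation reported for Fig. 4 can be illustrated with a self-contained routine. The study used non-metric MDS; as a simpler stand-in we sketch classical (metric) MDS via double-centering, which conveys the same idea of mapping a dissimilarity matrix to 2-D coordinates whose Euclidean distances approximate perceived similarity. All names are ours and illustrative.

```python
import numpy as np

def classical_mds(dissim, k=2):
    """Embed an n x n dissimilarity matrix into k dimensions.
    (Classical/metric MDS; the study itself used non-metric MDS.)"""
    d = np.asarray(dissim, dtype=float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:k]      # k largest eigenvalues
    scale = np.sqrt(np.clip(vals[top], 0, None))
    return vecs[:, top] * scale           # n x k coordinate configuration

def mean_pairwise_distance(coords):
    """Mean Euclidean distance over all point pairs (the per-group
    'separation' summary reported in the Fig. 4 legend)."""
    x = np.asarray(coords, dtype=float)
    diffs = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(x), k=1)
    return dist[iu].mean()
```

Under this summary, tighter clustering of emotion categories (as in the ASD group) yields a smaller mean pairwise distance.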
Therefore, we ran a multivariate repeated-measures ANOVA, which revealed no main effect of Group [F(1,32)=0.53, p=0.47] but a significant interaction between the two dependent variables (concordant and discordant intensity) and Group [F(1,160)=6.74, p=0.014]. To further explore the contribution of each of these variables to specificity, we next ran 2 univariate 2×6 repeated-measures ANOVAs. In terms of concordant intensity, there was a possible trend toward a significant main effect of Group (ASD/controls) [F(1,32)=2.45, p=0.127] as well as a significant main effect of Emotion [F(5,160)=3.20, p=0.0088], but the interaction was not significant [F(5,160)=0.74, p=0.59], suggesting possibly reduced overall intensity ratings for concordant emotion judgments in individuals with an ASD (mean=2.72, SD=0.56) compared to the control group (mean=2.97, SD=0.37) (Fig. 3B). Second, in terms of discordant ratings, there was a trend toward a significant main effect of Group (ASD/control) [F(1,32)=3.51, p=0.07], a main effect of Emotion [F(5,160)=115.93, p<0.0001], and a trend toward a Group×Emotion interaction [F(5,160)=1.97, p=0.086]. The mean intensity rating for discordant emotion judgments (i.e., confusion) in the ASD group (mean=−0.62, SD=0.60) was higher than in the control group (mean=−1.11, SD=0.85) (Fig. 3C). Post-hoc tests carried out for each emotion separately revealed higher discordant ratings in the ASD group for happy faces [t(32)=2.69, p=0.01, uncorrected], surprised faces [t(32)=2.71, p=0.01, uncorrected], and fearful faces [t(32)=2.09, p=0.046, uncorrected]. Therefore, the reduced selectivity in ASD appears to be a consequence of both reduced concordant ratings and increased discordant ratings (perhaps especially for happy, surprised, and fearful faces).
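The uncorrected post-hoc comparisons above are independent-samples t-tests with df=32, consistent with pooled-variance tests on groups of 15 and 19 subjects. A minimal sketch of that statistic follows; the data below are simulated for illustration only (drawn from the group means and SDs reported in the text), not the study's actual ratings.

```python
import numpy as np

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance
    (df = n1 + n2 - 2, matching the t(32) values reported above)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return t, n1 + n2 - 2

# Simulated per-subject mean discordant ratings (illustrative only)
rng = np.random.default_rng(0)
asd = rng.normal(-0.62, 0.60, 15)       # study group sizes: 15 vs 19
control = rng.normal(-1.11, 0.85, 19)
t, df = pooled_t(asd, control)          # df = 15 + 19 - 2 = 32
```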
To explore whether the lower selectivity we found in the ASD group might in fact result from lower reliability, we next examined test-retest reliability in a small subset of participants who completed the entire task a second time (ASD n=6; control n=8). The dependent measure was the difference between the rating given in the first session and the rating given in the second session, collapsed across all stimuli and ratings. To understand whether reliability might be influenced by the magnitude of the rating given in the first place, we carried out a 2 (Group: ASD/control) by 9 (Initial Rating: −4 to +4) ANOVA, using the raw (non-z-scored) absolute value of the difference between test and retest. Although there was no overall main effect of Group [F(1,12)=0.10, p=0.76], there was a significant effect of Initial Rating [F(8,124)=2.87, p=0.007] and a significant Group×Initial Rating interaction [F(8,124)=2.47, p=0.018]. We next grouped the data into high-intensity initial ratings (i.e., −4, −3, +3, +4) and low-intensity initial ratings (i.e., −2, −1, 0, +1, +2). Compared to controls, individuals with ASD had less reliable ratings for those faces that they initially rated as more emotionally intense [t(12)=3.29, p=0.006], whereas they did not differ from controls in reliability for faces they initially rated as less intense [t(12)=0.88, p=0.40] (Fig. 5). The mean rating change for high-intensity ratings was 1.19 (SD=0.30) in the ASD group and 0.80 (SD=0.12) in the control group. The mean rating change for low-intensity ratings was 1.18 (SD=0.42) in the ASD group and 1.39 (SD=0.43) in the control group.

Fig. 5. Test-retest reliability. The autism group exhibited less consistent ratings across a second identical testing session for high-intensity ratings (i.e., −4, −3, +3, +4).
The y-axis represents the mean of the absolute value of the difference between session 1 and session 2, and this value is plotted as a function of the rating level given in session 1 (x-axis).
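The test-retest summary above (absolute rating change, split by the intensity of the session-1 rating) can be sketched in a few lines. This is an illustrative reconstruction with our own naming; the threshold of |rating| >= 3 implements the high-intensity bin (−4, −3, +3, +4) described in the text.

```python
import numpy as np

def reliability_by_intensity(session1, session2, high=3):
    """Mean absolute test-retest change, split by whether the initial
    (session-1) rating was high intensity (|rating| >= high, i.e.
    -4, -3, +3, +4 on the study's scale) or low intensity."""
    s1 = np.asarray(session1, dtype=float)
    s2 = np.asarray(session2, dtype=float)
    change = np.abs(s1 - s2)          # absolute rating change per item
    high_mask = np.abs(s1) >= high    # split on the session-1 rating
    return change[high_mask].mean(), change[~high_mask].mean()

# Toy example (illustrative data): two high-intensity and two
# low-intensity initial ratings
high_change, low_change = reliability_by_intensity([4, -3, 0, 1],
                                                   [2, -3, 1, 1])
```

A larger high-intensity change with a comparable low-intensity change is the pattern the ASD group showed relative to controls.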