How well can young people with Asperger's disorder recognize threat and learn about facial affect?: A preliminary study
|Article code||Publication year||English article|
|31254||2010||7-page PDF|
Publisher : Elsevier - Science Direct
Journal : Research in Autism Spectrum Disorders, Volume 4, Issue 2, April–June 2010, Pages 242–248
The abilities to identify threat and to learn about affect in facial photographs were compared across three groups: non-autistic university students (NUS), an Asperger's group matched to them on the Standard Progressive Matrices (SPM; the MAS group), and an unmatched Asperger's group (UAS) that scored lower on the SPM. Participants were shown pairs of faces and asked which person looked more dangerous. In addition, they engaged in explicit learning of facial affect features. This study indicated (a) that the ability to identify threat in faces was intact in the MAS group but poor in the UAS group; (b) a graded degree of performance in facial affect recognition (NUS > MAS > UAS); and (c) that all groups improved their facial affect recognition after the brief explicit teaching intervention. This pilot study provides initial evidence that the ability to distinguish threat in faces can feasibly be assessed, and that brief explicit teaching of facial emotions may have remediation potential, in young people with Asperger's disorder.
In this pilot study we examine two aspects of facial perception in young people with Asperger's disorder. First, we examine their understanding of threat in faces. Second, we examine their recognition of basic emotions such as anger, sadness and fear, and their ability to gain in emotion recognition from training. Thus, our interest is in the ability of young people with Asperger's disorder to understand the meaning conveyed by facial features as a basis for potential neuropsychological assessment and intervention. Saint Jerome (374–419 AD), a Father of the Latin Church, stated that "the face is the mirror of the mind, and eyes without speaking confess the secrets of the heart." However, the ability to use the eyes to infer mental states is not as readily available to individuals with autistic spectrum disorder (ASD), including Asperger's disorder (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). They do not seem to see the mind in the face as clearly as others do, which can interfere with reciprocal communication and social interactions (American Psychiatric Association, 1994).

One important social insight that has been linked to mental state understanding is the ability to perceive threat in faces. This ability seems to be poor in certain populations. For instance, Adolphs, Tranel, and Damasio (1998) asked adults to rate face-only photographs for their approachability and trustworthiness, and found that adults with bilateral amygdala lesions were less likely than normal controls to distinguish between typical high-threat faces and lower-threat faces. Likewise, the amygdala has been shown to play a central role in trustworthiness judgments in fMRI studies (Engell et al., 2007; Todorov et al., 2008). Moreover, people make judgments about trustworthiness very quickly: judgments after 100 ms of exposure to a face are very similar to judgments after 1000 ms (Willis & Todorov, 2006).
Using a subset of these stimuli, Ruffman, Sullivan, and Edge (2006) demonstrated that normal control adults rated unsmiling males as most dangerous and smiling females as least dangerous, and found that older adults did not distinguish between the faces to the same extent as young adults. Similar stimuli have also been given to adults with autism. Adolphs, Sears, and Piven (2001) found that individuals with autism tended to rate all faces more positively than normal controls did, and in this respect were similar to individuals with amygdala lesions. In sum, aging, amygdala lesions, and autism all seem to lower the ability to recognize threat in faces. None of the individuals featured in the photographs of the studies described above were known to be a genuine threat; some just fit the conventional stereotype of "threatening" better than others. Recently, however, Ruffman, Slade, O'Brien, Alder, and Taumoepeau (submitted) presented normally developing children with face pairs consisting of a layperson and a male murderer on death row convicted of a premeditated, willful murder rather than a crime of passion. Participants were asked which one looked more dangerous. The facial expression of each individual was ostensibly neutral and non-smiling, with only subtle differences in facial characteristics. Ruffman et al. argued that a deepening understanding of mental states would help children give meaning to different facial expressions as indicative of malicious intent. In line with expectations, they found that children's ability to distinguish between murderers and laypersons correlated with their theory of mind (in one experiment with 3- to 4-year-olds' false belief performance, and in a second experiment with 5- to 6-year-olds' ability to discern complex mental states in the eyes), and did so independently of another cognitive measure, vocabulary.
These data suggest that it is possible to differentiate photographs of death row murderers from those of laypeople, and the ability to do so correlates with mental state understanding as indexed in various theory of mind tasks. In the present study, we examined whether individuals with Asperger's disorder could differentiate such persons as well as individuals without autism. We expected that adults with Asperger's disorder would have difficulty because they had difficulty differentiating high and low threat faces in the study of Adolphs et al. (2001) described above. Petersilia (2001) also suggested that individuals with autistic disorder and Asperger's disorder were at high risk of becoming crime victims, possibly because of their difficulties in perceiving threat in others' faces. Second, we investigated facial affect recognition. Despite the extensive descriptions of impoverished facial expressions in ASD that have appeared in the literature since Kanner (1943) and Asperger (1944), research into facial affect recognition in ASD was initiated only relatively recently by Hobson (1986). In the two decades since then, the accumulating research findings have remained controversial. Some studies have demonstrated that individuals with ASD are inferior to controls in recognizing facial affect (e.g., Grossman et al., 2000; Szatmari et al., 1990; Tantam et al., 1989), whereas others have indicated no difference (e.g., Davies et al., 1994; Gepner et al., 1996; Ozonoff et al., 1990). The reasons for the differential findings seem to derive from the nature of the facial stimuli, as well as the way in which control individuals are matched to ASD individuals (Miyahara, Bray, Tsujii, Fujita, & Sugiyama, 2007).
While controversy remains over the differential ability for facial affect recognition in ASD, researchers have employed various types of facial stimuli to investigate the trainability of facial affect recognition in children with ASD (Hadwin, Baron-Cohen, Howlin, & Hill, 1996), adolescents (Silver & Oakes, 2001), and adults (Golan & Baron-Cohen, 2006). For instance, Hadwin et al. (1996) taught 10 children with ASD to recognize the photographed facial expressions of happiness, sadness, anger and fear using a question–answer format with corrective feedback. They also taught the children about the situations, desires, and beliefs that could cause different emotions. The accuracy of the children's facial affect recognition improved significantly after eight training days. Silver and Oakes (2001) also included a task of facial affect recognition in their computer intervention designed to teach emotion recognition and prediction. On the facial affect recognition task, 11 adolescents with ASD were asked to match each of a series of 10 photographed facial expressions with a button labeled "angry", "afraid", "sad" or "happy". After 10 days of intervention, the number of correctly identified emotions increased, but the improvement was not statistically significant. To explain the non-significant improvement, the authors questioned the validity of the facial stimuli in their study. Golan and Baron-Cohen (2006) used a computerized multimedia program to teach emotions and mental states in which facial affect was presented in video clips along with a definition of each emotional expression. The emotional definitions did not include descriptions of the distinct facial features of each expression. Adults at the high-functioning end of the autism spectrum improved significantly in their ability to match the video clips to one of four emotion adjectives after 10 to 15 weeks of the computer intervention.
In summary, the intervention outcomes of these programs that include facial affect recognition are inconsistent, with success sometimes occurring but not always. The three intervention studies have some aspects in common. All included tasks of facial affect recognition within larger intervention programs that also taught more complex dimensions of emotions and social skills over several days. The long intervention periods and the inclusion of other complex aspects of emotion make it difficult to identify which parts of the intervention programs helped the participants learn to recognize facial affect per se. Moreover, the teaching strategies were limited to matching faces and labels with corrective feedback (Hadwin et al., 1996; Golan & Baron-Cohen, 2006) or corrective feedback plus verbal cues (Silver & Oakes, 2001). Different methods are possible for teaching facial expressions. Previous studies have used verbal labels to identify facial expressions in the training phase. In the present study, we told participants which distinct patterns of facial musculature corresponded to particular emotions. Given that individuals with ASD excel in visual-verbal processing over visual-affective processing (Grossman et al., 2000), it may be useful to teach the specific facial features that characterize different emotions didactically, by associating the visual characteristics of facial expressions with their verbal labels. Furthermore, focusing ASD individuals on specific facial features seems particularly important given that they have different face processing strategies (Rondan, Gepner, & Deruelle, 2003), look more at mouths than eyes compared to individuals without autism (Klin, Jones, Schultz, Volkmar, & Cohen, 2002), and are worse at discerning emotion from the eyes than individuals without autism (Joseph & Tanaka, 2002). To summarize, there were two aims to our study.
First, we examined understanding of threat in Asperger's disorder by asking participants to say whether a layperson or a convicted murderer on death row looked more dangerous. Second, we examined facial affect recognition and learning by teaching participants about the specific patterns of facial musculature that are associated with different emotions using a set of validated facial affect stimuli.