Download English ISI Article No. 37797
English Title
Using video modeling to teach children with PDD-NOS to respond to facial expressions
Article Code: 37797
Publication Year: 2012
English PDF Page Count: (not stated)
Source

Publisher: Elsevier - Science Direct

Journal : Research in Autism Spectrum Disorders, Volume 6, Issue 3, July–September 2012, Pages 1176–1185

Keywords
Emotions; Empathy; Eye contact; Facial expressions; Perspective-taking; Video modeling

Abstract

Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval, disgusted, impatient, pain, and pleased. Probes consisted of showing an adult performing these facial expressions in a video, and we conducted generalization probes across people and settings. Training consisted of showing a video of an adult modeling a response to each facial expression. In the context of a multiple probe across behaviors design, two participants correctly responded to all facial expressions across people and settings after viewing the video models one or two times. Experimental control was achieved with the third participant, though he required more training sessions and responded less consistently. Future researchers should evaluate ways to teach and test responding to facial expressions under naturalistic conditions.

Introduction

Responding to people's facial expressions is necessary for observational learning, showing empathy, and other social processes (Clark et al., 2008 and Ekman, 1984). Children with autism spectrum disorders (ASD) exhibit delays and deficits in responding to people's faces and emotions (Dawson et al., 2005 and Kasari et al., 1993). Numerous studies have shown that when compared with typically developing children, children with autism have difficulty responding to facial expressions (e.g., happy, sad, angry; Grossman and Tager-Flusberg, 2008, Klin et al., 1999 and Wright et al., 2008), recognizing emotions (Dyck et al., 2001 and Rump et al., 2009), and perceiving gaze (Ashwin, Ricciardelli, & Baron-Cohen, 2009). Researchers have found that when looking at faces, typically developing individuals commonly look at people's eyes whereas individuals with autism look at people's mouths and inanimate objects (McPartland et al., 2011, Riby et al., 2009 and Spezio et al., 2007). To complicate matters further, facial expressions often have durations lasting microseconds, and children with autism have more difficulty recognizing faces when presented rapidly (Beall et al., 2008 and Clark et al., 2008). Finally, functional magnetic resonance imaging (fMRI) has shown that the amygdala, fusiform gyrus, and other parts of the brain show different types of activity when people with autism versus typically developing people look at faces (Ashwin et al., 2007, Ishitobi et al., 2011, Kleinhans et al., 2008, Ogai et al., 2003, Pelphrey et al., 2002 and Wang et al., 2004). Because responding to facial expressions is critical for succeeding in social situations, research is needed to identify ways to teach this repertoire.

Teaching empathy skills and perspective-taking to children with ASD has gained recent empirical attention (Gena et al., 1996, Harris et al., 1990, Reeve et al., 2007 and Schrandt et al., 2009). Schrandt et al. used a treatment package to teach 4 children with autism to emit empathic statements in the context of pretend play. The stimuli were dolls and puppets that emoted sadness/pain, happiness/excitement, and frustration. For example, the experimenter took a doll and had it bump the table and say, "Ouch." A correct response was saying "Are you ok?" and patting the puppet. The training package consisted of prompt delay, modeling using scripts, manual prompts, behavioral rehearsals, and reinforcement. In the context of a multiple baseline across participants design, this package increased the participants' frequencies of empathy responses. Generalization of empathic responding across stimuli and to actual people was observed. Two limitations of this study were the use of dolls and puppets as teaching agents and the use of a large treatment package.

In a study using a narrower set of interventions, Bernad-Ripoll (2007) used self-as-model videos and social stories to teach a 9-year-old boy with Asperger's syndrome to recognize his own emotions. The boy was shown videos of himself engaging in common activities (e.g., making his bed) with either a positive or negative emotional affect. Targeted emotions included frustration, happiness, anxiety, boredom, calmness, anger, and excitement. In the probe, he was asked, "How are you feeling?" "Why did you feel like this?" and "What should you do next time?" The intervention consisted of showing the boy social stories with pictures and text explaining the emotions he was feeling. Food and preferred outings were used to reinforce watching the videos. The intervention increased the percentage of times the boy correctly labeled his emotions and the percentage of correct explanations and action responses. These effects were evaluated in an AB design, a limitation of the study. There was evidence of generalization of the target behaviors to situations that arose incidentally.

The use of videos was an interesting feature of the study by Bernad-Ripoll, though no studies were found that used video modeling to teach children to respond to facial expressions. Video modeling is an intervention in which a video of an adult, another child, or oneself demonstrating desired behaviors or skills is shown to a target individual (see Bellini & Akullian, 2007 for a review). Researchers have used video modeling to teach play skills (Blum-Dimaya et al., 2010, Boudreau and D’Entremont, 2010, Hine and Wolery, 2006, MacDonald et al., 2009, Nikopoulos and Keenan, 2007, Palechka and MacDonald, 2010, Paterson and Arco, 2007, Reagon et al., 2006 and Sancho et al., 2010), self-help skills (Bidwell and Rehfeldt, 2004, Cannella-Malone et al., 2011, Mechling et al., 2009, Rosenberg et al., 2010 and Shipley-Benamou et al., 2002), social skills (Buggey et al., 2011 and Tetreault and Lerman, 2010), imitation skills (Cardon and Wilcox, 2011 and Kleeberger and Mirenda, 2010), conversation skills (Scattone, 2008), iPod use (Hammond et al., 2010 and Kagohara, 2011), vocational skills (Allen, Wallace, & Renes, 2010), transition skills (Cihak, 2011, Cihak and Ayres, 2010 and Cihak et al., 2010), and reading skills (Marcus & Wilder, 2009). Video modeling has been effective with children with autism, perhaps because they often enjoy watching videos.

In the area of emotions and facial expressions, three studies were found that used video modeling to teach perspective-taking skills to children with autism (Charlop-Christy and Daneshvar, 2003, Charlop-Christy et al., 2000 and LeBlanc et al., 2003). Perspective-taking involves identifying what another person is thinking, and requires observation of subtle social situations similar to responding to facial expressions (Sigman & Capps, 1997). The Sally-Anne false-belief task is a common way to detect perspective-taking (Baron-Cohen, Leslie, & Frith, 1985). In this task, a child is shown two puppets that "see" an adult put an object under a bowl. One puppet "leaves," and the adult puts the object under a cup. When the puppet "returns," the child is asked where the puppet thinks the object is. If the child says "cup," there is evidence of a delay in perspective-taking. LeBlanc et al. (2003) used video modeling to teach three children with autism to respond to this task and two similar tasks. The video modeling consisted of an adult performing the task with the camera zooming in on relevant parts. The experimenter paused the video at points to ask the participant to repeat the responses modeled in the video. Increases in correctly responding to false-belief tasks in the absence of modeling or feedback were observed for the three participants in the context of a multiple baseline across tasks design.

The purpose of the current study was to extend the research on teaching children with autism to respond to facial expressions. The participants' teacher reported that they could respond to simple facial expressions (e.g., sad, happy, surprised, angry), but not to more subtle facial expressions such as impatience, disgust, and approval. Many previous studies used static pictures or puppets, and we wanted to present the faces in videos, as in the study by Bernad-Ripoll. In many previous studies, labeling the facial expression was the dependent variable, whereas we were interested in teaching children to respond to facial expressions, as in the study by Schrandt et al. Video modeling was the only intervention we used, and another purpose of this study was to extend that literature. In addition to teaching the participants to respond to eight facial expressions, we assessed maintenance and generalization across people and settings.

Results

In baseline, Hank did not emit any correct responses to the eight facial expressions, either on the video or with his teacher in the cubby (Fig. 1). After seeing the model just one time, he correctly responded to all eight facial expressions, and those responses maintained. He also responded to the eight facial expressions with his aide and his teacher making the facial expressions in his typical classroom, with a researcher making the faces in the teaching cubby, and with a typical child making the facial expressions in a classroom. The only incorrect responses following video modeling were when responding to facial expressions from his aide in the typical classroom.

Fig. 1. Number of correct responses to 8 facial expressions in baseline and post-video modeling acquisition and generalization probes for Hank.

Ken correctly responded to "bored," "calming," and "pain" in baseline, though those responses went to zero over the course of that baseline (Fig. 2). He did not correctly respond to the other five facial expressions in baseline. Once video modeling began, Ken correctly responded to the facial expressions in each tier of the multiple probe design, and those responses maintained over time. In Session 9, Ken correctly responded to all eight facial expressions emitted by his teacher in the empty classroom. In a probe of responding to his aide in the typical classroom, Ken was correct on 6 out of 8 facial expressions.

Fig. 2. Number of correct responses to 8 facial expressions in baseline and video modeling acquisition and generalization probes for Ken.

In baseline, Bill correctly responded to 2 of the 8 facial expressions, and in the first two tiers of the multiple probe design his responding was consistent (Fig. 3). In the first tier ("approval," "impatient," and "bored"), after three sessions of video modeling, Bill correctly responded to the three facial expressions. These responses maintained to a high degree, though he was occasionally incorrect on one response in maintenance probes. In the second tier of the design, training was effective in increasing correct responses to facial expressions, though Bill never correctly responded to all four facial expressions in this tier. With "calming" in the third tier, Bill was inconsistent at first and then was consistently correct in responding to this facial expression. In Session 9, Bill correctly responded to 4 out of 8 facial expressions emitted by his teacher in the cubby. In Session 12, he correctly responded to 6 out of 8 facial expressions emitted by his teacher in his typical classroom. In the maintenance probes conducted 4 and 6 months later, Bill correctly responded to only 3 of the 8 facial expressions.

Fig. 3. Number of correct responses to 8 facial expressions in baseline and video modeling acquisition and generalization probes for Bill.