Download ISI English Article No. 37971
Article Title

Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders
Article code: 37971
Publication year: 2015
Length: 8 pages (English-language PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Research in Developmental Disabilities, Volume 36, January 2015, Pages 396–403

Keywords
Augmented reality (AR); Emotions; Self-facial modeling; Three-dimensional (3-D) facial expressions; 3-D facial animation
Article Preview

English Abstract

Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed whether three adolescents with ASD could become aware of facial expressions observed in school-setting situations simulated with augmented reality (AR) technology. The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on the participants' faces to facilitate practicing emotional judgments and social skills. Based on a multiple-baseline design across subjects, the data indicated that the AR intervention can improve appropriate recognition of, and response to, the facial emotional expressions seen in the situational task.
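The preview does not describe the implementation, but the core AR idea stated here (detect the participant's face in a live camera frame and blend a pre-rendered 3-D expression animation frame over it) can be sketched as follows. This is a hypothetical Python/OpenCV illustration, not the authors' system; the asset path expression_frames/happy_000.png, the Haar-cascade detector, and the simple alpha blending are all assumptions made for the example.

# Minimal sketch of the AR overlay idea described in the abstract: detect the
# participant's face and alpha-blend a pre-rendered 3-D expression frame onto it.
# The expression asset is a hypothetical RGBA PNG, not an asset from the paper.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def overlay_expression(frame, overlay_rgba, box):
    """Alpha-blend an RGBA expression image onto the detected face region."""
    x, y, w, h = box
    overlay = cv2.resize(overlay_rgba, (w, h))
    alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * overlay[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    frame[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame

camera = cv2.VideoCapture(0)
# Hypothetical pre-rendered frame of a 3-D "happiness" expression (RGBA).
expression = cv2.imread("expression_frames/happy_000.png", cv2.IMREAD_UNCHANGED)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for box in faces:
        frame = overlay_expression(frame, expression, box)
    cv2.imshow("AR self-facial modeling (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()

A real system would render the six basic expressions from a 3-D face model and drive the overlay from tracked facial landmarks rather than a static bounding box; this sketch only shows the detect-and-overlay loop.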

English Introduction

Autism spectrum disorders (ASD) are characterized by atypical patterns of behavior and impaired social communication (American Psychiatric Association, 2000; Krasny et al., 2003). A central challenge of social interaction for people with ASD is appropriately recognizing and understanding the facial expressions that indicate emotions (Dawson et al., 2005; Ryan & Charragain, 2010; Williams et al., 2012). People with ASD have difficulty understanding the expressions and emotional states of other people and determining their intentions and thought processes, which impairs their ability to respond with appropriate expressions and to interact appropriately with their peers (Krasny et al., 2003). In addition, facial expression processing is atypical in people with ASD (Annaz, Karmiloff-Smith, Johnson, & Thomas, 2009); although some people with high-functioning autism (HFA) are relatively adept at social communication involving complex facial emotions, they have difficulty with nonverbal communication (Elder, Caterino, Chao, Shacknai, & De Simone, 2006).

Emotion recognition is among the skills most crucial to social interaction and to developing empathy (Baron-Cohen, 2002). Relevant studies have described empathy as a lens through which people comprehend emotional expressions and respond appropriately (Sucksmith, Allison, Baron-Cohen, Chakrabarti, & Hoekstra, 2013). However, people with ASD have deficits that include being unable to view events from the perspective of other people and to respond with appropriate expressions (Baron-Cohen & Belmonte, 2005; Baron-Cohen et al., 1985).

Research on emotional impairment in ASD has focused primarily on examining emotional recognition and understanding and on teaching facial expressions by labeling them on formatted photographs (Ashwin et al., 2005; Begeer et al., 2006; Ben Shalom et al., 2006; Castelli, 2005; Wang et al., 2004). For example, various facial expressions in photos and videos have been used to develop the communication skills of people with ASD, enabling them to focus on the specific visual representations and facial cues from which the facial emotions of others can be determined (Blum-Dimaya, Reeve, Reeve, & Hoch, 2010). Current intervention systems for people with ASD apply a third-person perspective to recognizing and manipulating feelings based on the facial synthesis of 3-D characters; such systems support reusable facial components and provide an avatar-user interaction model with real-time responses (Kientz, Goodwin, Hayes, & Abowd, 2013), for example, online games that depict an imaginary world from a third-person perspective to represent the actions and statuses of other people. However, although the expressions of an avatar or cartoon character (Tseng & Do, 2010) facilitate learning about emotions, methods in which information is not presented from the perspective of the participants do not enable people with ASD to see the expressions on their own faces and thereby connect the expression with their thoughts (Young & Posselt, 2012). In addition, video self-modeling (VSM) has been used for social skills training; participants watch a video of a person modeling a desired behavior and then imitate that behavior (Axe & Evans, 2012). However, using VSM as an intervention strategy for individuals with ASD does not provide immediate feedback on the facial states of the participants during a scenario.
These systems simply record the events that occur during a scenario and the physical behaviors that participants imitate; consequently, participants receive little instruction in their own facial expressions. People with ASD have difficulty accessing self-facial expression training because training scenarios and real-time mood simulations, in which people can pretend to feel various emotions, are unavailable. Emerging technologies such as augmented reality (AR) can therefore be applied to teach learners to explore material from various perspectives (Asai, Kobayashi, & Kondo, 2005). Because these technologies can stimulate the senses of the user, they may be particularly useful for teaching subject matter that learners have difficulty experiencing in the real world (Chien et al., 2010; Shelton & Hedley, 2002) and for facilitating social interaction. In addition, unlike traditional learning content, which provides only static text and facial images to describe an emotional expression, an AR instructional model can present the core learning content directly to participants with ASD and assist them in exploring their own facial expressions. We therefore created an AR application intended to increase emotional expression recognition and social skills, built around the situational practice loop sketched below.
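The situational tasks described in the abstract and introduction amount to a simple practice loop: present a simulated school situation, ask the learner to choose the matching basic emotion, and record whether the response was appropriate. The sketch below is a hypothetical Python illustration of that loop; the scenario texts, emotion list, and function names are invented and are not taken from the paper.

# Hypothetical sketch of a scenario-based emotion-judgment practice session.
BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

# (situation description, expected emotion) - illustrative examples only.
SCENARIOS = [
    ("A classmate shares their snack with you.", "happiness"),
    ("Your drawing is accidentally torn.", "sadness"),
]

def run_session(get_response):
    """Run one practice session; get_response(situation, options) returns a choice."""
    log = []
    for situation, expected in SCENARIOS:
        choice = get_response(situation, BASIC_EMOTIONS)
        log.append({"situation": situation, "choice": choice,
                    "correct": choice == expected})
    correct_rate = 100.0 * sum(entry["correct"] for entry in log) / len(log)
    return log, correct_rate

if __name__ == "__main__":
    # Simulated responder for demonstration; a real session would show the AR
    # overlay and capture the participant's selection instead.
    log, rate = run_session(lambda situation, options: options[0])
    print(f"Correct assessment rate: {rate:.1f}%")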

English Results

Experimental data on Zhu, Lin, and Lai in each phase were analyzed. The baseline phase consisted of 3 sessions for Zhu, 5 sessions for Lin, and 7 sessions for Lai. The intervention phase consisted of 7 sessions for all participants. The follow-up phase consisted of 8 sessions for Zhu, 6 sessions for Lin, and 4 sessions for Lai.

In the baseline phase, the participants could not easily determine the emotions that the six facial expressions represented; in particular, they frequently confused fear and disgust. Moreover, they could not understand the events or appropriately recognize and respond to the facial expressions. Although the participants could choose the correct adjectives to describe emotions, they could not identify the facial expressions that corresponded to those emotions. During the intervention phase, learning and practicing with the ARSFM enabled the participants to compare the features of each 3-D facial model enthusiastically and actively, thereby improving their social skills and their ability to differentiate emotional facial expressions.

Fig. 4 shows the mean correct assessment rates of the three participants after using the ARSFM learning system. The curves indicate that the correct assessment rates of the participants improved after training and that, in the follow-up phase, the participants retained the emotional expression and social skills learned in the intervention phase. During the baseline phase (three sessions), the mean correct assessment rate for Zhu was approximately 20%; it rose to 96.43% during the intervention phase (seven sessions) and was 81.25% during the follow-up phase (eight sessions). The mean correct assessment rate for Lin was approximately 27% during the baseline phase (five sessions), increased to 92.14% during the intervention phase (seven sessions), and was 80.83% during the follow-up phase (six sessions). For Lai, the mean correct assessment rate was approximately 38.75% during the baseline phase (seven sessions), increased to 92.85% during the intervention phase (seven sessions), and was 80.75% during the follow-up phase (four sessions).

Fig. 4. Correct assessment rates of the participants during the three testing phases.

The Kolmogorov–Smirnov test (Siegel & Castellan, 1988) was used to analyze the data from the three phases. The mean difference in performance level between the baseline and intervention phases was significant (p < .05) for all participants. In addition, the mean difference in performance level between the baseline and follow-up phases was significant (p < .05).
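For readers who want to replicate this style of analysis, the sketch below applies a two-sample Kolmogorov–Smirnov test (scipy.stats.ks_2samp) to per-session correct assessment rates, comparing the baseline phase with the intervention and follow-up phases. The session scores are placeholder values chosen only to illustrate the computation; they are not the study's data.

# Illustrative re-creation of the phase comparison: two-sample Kolmogorov-Smirnov
# tests on per-session correct assessment rates. Scores below are placeholders.
import numpy as np
from scipy import stats

baseline = np.array([20.0, 18.0, 22.0])                                 # e.g. 3 baseline sessions
intervention = np.array([90.0, 95.0, 97.5, 100.0, 97.5, 97.5, 97.5])    # 7 intervention sessions
followup = np.array([80.0, 82.5, 80.0, 82.5, 80.0, 82.5, 80.0, 82.5])   # 8 follow-up sessions

for label, phase in [("intervention", intervention), ("follow-up", followup)]:
    stat, p = stats.ks_2samp(baseline, phase)
    print(f"baseline vs {label}: mean difference = {phase.mean() - baseline.mean():.2f}, "
          f"KS statistic = {stat:.3f}, p = {p:.4f}")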