Categorical Perception of Affective and Linguistic Facial Expressions
Article Code | Publication Year | English Article Pages
---|---|---
37770 | 2009 | 14-page PDF

Publisher: Elsevier - Science Direct
Journal: Cognition, Volume 110, Issue 2, February 2009, Pages 208–221
English Abstract
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories in both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response times for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
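The "equal linear steps" construction of the continua is easy to make concrete. Below is a minimal Python sketch, assuming a simple pixel-space cross-dissolve between two aligned, same-size photographs; the study itself used dedicated computer-morphing software, which also warps facial geometry, so this is only an illustration of the stepping scheme. The function name and the image-loading step are hypothetical, not from the paper.

```python
# Minimal sketch: an 11-step morph continuum between two expression photos,
# blended linearly in equal steps (a pixel cross-dissolve, NOT the
# feature-based morphing software used in the actual study).
import numpy as np

def morph_continuum(face_a: np.ndarray, face_b: np.ndarray, n_steps: int = 11):
    """Return n_steps images blending face_a (step 0) into face_b (last step)."""
    weights = np.linspace(0.0, 1.0, n_steps)  # 0%, 10%, ..., 100% of face_b
    return [(1.0 - w) * face_a + w * face_b for w in weights]

# Hypothetical usage with two aligned grayscale photographs of one model:
# a = load_image("expression_a.png").astype(float)  # hypothetical loader
# b = load_image("expression_b.png").astype(float)
# continuum = morph_continuum(a, b)  # 11 images in equal linear steps
```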
English Introduction
In his book “The Expression of the Emotions in Man and Animals”, Darwin (1872) noted that Laura Bridgman, a woman who was born deaf and blind, was able to spontaneously express a wide range of affective facial expressions that she had never been able to observe in others. This case was one of the intriguing arguments Darwin put forth in support of the evolutionary underpinnings of facial expressions in humans. Since then, an extensive body of research, including seminal cross-cultural studies by Ekman and colleagues (Ekman, 1994; Ekman & Friesen, 1971; Ekman et al., 1987), has provided empirical support for the evolutionary basis of affective facial expression production. Yet the developmental and neural mechanisms underlying the perception of facial expressions are still not fully understood. The ease and speed of facial expression perception and categorization suggest a highly specialized system or systems, prompting several questions. Is the ability to recognize and categorize facial expressions innate? If so, is this ability limited to affective facial expressions? Can the perception of affective facial expressions be modified by experience? If so, how and to what extent? This study attempts to elucidate some of these questions by investigating categorical perception for facial expressions in Deaf users of American Sign Language, a population for whom the use of facial expression is required for language production and comprehension.

Categorical perception (CP) is a psychophysical phenomenon in which a uniform, continuous change along a perceptual continuum is perceived as a series of discontinuous variations. More specifically, stimuli are perceived as qualitatively similar within categories and qualitatively different across categories. An example of categorical perception is the hue demarcations in the rainbow spectrum: humans with normal vision perceive discrete color categories within a continuum of uniform linear changes in light wavelength (Bornstein & Korda, 1984). The difference between a green and a yellow hue is more easily perceived than the difference between two yellow hues, even when the wavelength differences are exactly the same. Livingston, Andrews, and Harnad (1998) argued that this phenomenon results from compression (differences among stimuli in the same category are minimized) and expansion (differences among stimuli from different categories are exaggerated) of the perceived stimuli relative to a perceptual baseline. These compression and expansion effects may reduce continuous perceptual input into simple, relevant, and manageable chunks for further cognitive processing and concept formation.

Recently, several studies have indicated that linguistic category labels play a role in categorical perception (Gilbert et al., 2006; Roberson et al., 2007; Roberson et al., 2008). Roberson and Davidoff (2000) found that verbal interference eliminates CP effects for color, although this effect was not observed when participants were unable to anticipate the verbal interference task (Pilling, Wiggett, Özgen, & Davies, 2003). Roberson et al. (2008) found CP effects for Korean speakers, but not English speakers, for color categories that have standard color terms in Korean but not in English. Roberson et al. (2007) propose that linguistic labels may activate a category prototype, which biases perceptual judgments of similarity (see also Huttenlocher, Hedges, & Vevea, 2000).
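The compression/expansion account implies a concrete test, which is how CP effects are typically quantified with the two tasks named in the abstract: the identification task locates the category boundary along the continuum, and ABX discrimination accuracy is then compared for stimulus pairs that straddle the boundary versus equally spaced pairs within a category. The Python sketch below illustrates that logic under simplifying assumptions; all function names and numbers are illustrative, not taken from the paper.

```python
# Illustrative CP analysis: find the identification boundary, then compare
# cross-boundary vs. within-category ABX discrimination accuracy.
import numpy as np

def boundary_step(p_label_b: np.ndarray) -> int:
    """Index of the first continuum step identified as category B >50% of the time."""
    return int(np.argmax(p_label_b > 0.5))

def cp_effect(abx_accuracy: np.ndarray, boundary: int, pair_span: int = 2) -> float:
    """Mean cross-boundary minus mean within-category ABX accuracy.

    abx_accuracy[i] is accuracy for discriminating steps (i, i + pair_span).
    A positive difference is the expansion/compression signature of CP.
    """
    pairs = np.arange(len(abx_accuracy))
    crosses = (pairs < boundary) & (pairs + pair_span >= boundary)
    return abx_accuracy[crosses].mean() - abx_accuracy[~crosses].mean()

# Made-up data for an 11-step continuum (pairs two steps apart -> 9 pairs):
p_b = np.array([.02, .04, .05, .10, .30, .72, .90, .95, .97, .98, .99])
acc = np.array([.55, .58, .60, .78, .85, .80, .62, .57, .56])
print(cp_effect(acc, boundary_step(p_b)))  # > 0 suggests categorical perception
```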
Investigation of the CP phenomenon in various perceptual and cognitive domains has provided insights into the development and workings of the mechanisms underlying different cognitive processes. For example, our understanding of cognitive development in speech perception continues to evolve through a large body of CP studies involving voice-onset time (VOT). VOT studies have shown that English listeners have a sharp phoneme boundary between /ba/ and /pa/ sounds, which differ primarily in the onset time of laryngeal vibration (Eimas et al., 1971; Liberman et al., 1957). Other variants of speech CP studies have shown that 6-month-old infants can easily discern speech sound boundaries from languages not spoken by their families. For example, infants from Japanese-speaking families can distinguish /l/ and /r/ sounds, which are difficult for adult speakers of Japanese to distinguish; however, by 1 year of age this ability diminishes (Eimas, 1975). Bates, Thal, Finlay, and Clancy (2003) argue that the decline in the ability to discern phonemic contrasts absent from one’s native language coincides with the first signs of word comprehension, suggesting that language learning can result in low-level perceptual changes. In addition, Iverson et al. (2003) used multi-dimensional scaling to analyze the perception of phonemic contrasts by Japanese, German, and American native speakers and found that the perceptual sensitivities formed within the native language corresponded directly to group differences in perceptual saliency for within- and between-category acoustic variation in English /r/ and /l/ segments.

Similar effects of linguistic experience on categorical perception have been found for phonological units in American Sign Language (ASL). Using computer-generated continua of ASL hand configurations, Emmorey, McCullough, and Brentari (2003) found that Deaf ASL signers exhibited CP effects for visually presented, phonologically contrastive handshapes, whereas hearing individuals with no knowledge of sign language showed no evidence of CP effects (see Brentari (1998) for discussion of sign language phonology). Baker, Idsardi, Golinkoff, and Petitto (2005) replicated these findings using naturally produced stimuli and an additional set of contrastive ASL handshapes. Since categorical perception occurred only for specific learned hand configurations, these studies show that CP effects can be induced by learning a language in a different modality and that these effects emerge independently of low-level perceptual contours or sensitivities.

1.1. Categorical perception for facial expression

Many studies have found that affective facial expressions are perceived categorically when presented in an upright, canonical orientation (Calder et al., 1996; Campanella et al., 2002; de Gelder et al., 1997; Etcoff & Magee, 1992; Herpa et al., 2007; Kiffel et al., 2005; Roberson et al., 2007). Campbell, Woll, Benson, and Wallace (1999) investigated whether CP effects for facial expressions extend to learned facial actions. They examined whether Deaf signers, hearing signers, and hearing non-signers exhibited CP effects for syntactic facial expressions marking yes–no and Wh-questions in British Sign Language (BSL) and for similar affective facial expressions: surprised and puzzled. Syntactic facial expressions are specific linguistic facial expressions that signal grammatical contrasts.
Yes–no questions in BSL (and in ASL) are marked by raised brows, and Wh-questions are marked by furrowed brows. Campbell et al. (1999) found no statistically significant CP effects for these BSL facial expressions for any group. However, when participants from each group were analyzed individually, 20% of the non-signers, 50% of the Deaf signers, and 58% of the hearing signers demonstrated CP effects for BSL syntactic facial expressions. Campbell et al. (1999) suggested that CP effects for BSL expressions may be present but weak for both signers and non-signers. In contrast, all groups showed clear CP effects for the continuum of surprised–puzzled facial expressions.

Campbell et al. (1999) acknowledged several possible methodological problems with their study. First, the groups differed significantly in the age at which BSL was acquired: the mean age of BSL acquisition was 20 years for the hearing signers and 7 years for the Deaf signers. Several studies have shown that the age of sign language acquisition is critical and has a lifelong impact on language proficiency (e.g., Mayberry, 1993; Newport, 1990). Second, and perhaps more importantly, only six images were used to create the stimulus continua (two end-points and four intermediates). With such large steps between the end-points, the category boundaries and CP effects observed are somewhat suspect.

1.2. Facial expressions in American Sign Language

The present studies address these methodological issues and investigate whether categorical perception extends to linguistic facial expressions from American Sign Language. Although ASL and BSL are mutually unintelligible languages, both use facial expressions to signal grammatical structures such as Wh-questions, yes–no questions, and conditional clauses, as well as adverbials (Baker-Shenk, 1983; Liddell, 1980; Reilly et al., 1990a). Linguistic facial expressions differ from affective expressions in their scope, their timing, and the facial muscles used (Reilly, McIntire, & Bellugi, 1990b). Linguistic facial expressions have a clear onset and offset and are highly coordinated with specific parts of the signed sentence. These expressions are critical for interpreting the syntactic structure of many ASL sentences. For example, both Wh-questions and Wh-phrases are accompanied by a furrowed brow, which must be timed to co-occur with the manually produced clause, and syntactic scope is indicated by the location and duration of the facial expression. In addition, yes–no questions are distinguished from declarative sentences by raised eyebrows, which must be timed with the onset of the question.

Naturally, ASL signers also use their face to convey affective information. Thus, when perceiving visual linguistic input, signers must be able to quickly discriminate and concurrently process different linguistic and affective facial expressions in order to understand signed sentences. As a result, ASL signers have a very different perceptual and cognitive experience with the human face than non-signers. Indeed, behavioral studies have suggested that this experience leads to specific enhancements in face processing abilities: ASL signers (both deaf and hearing) performed significantly better than non-signers when memorizing faces (Arnold & Murray, 1998), discriminating faces under different spatial orientations and lighting conditions (Bettger, Emmorey, McCullough, & Bellugi, 1997), and discriminating local facial features (McCullough & Emmorey, 1997).
Given these differences in the perceptual and cognitive processing of faces, we hypothesize that ASL signers may have developed internal representations of facial expressions that differ from those of non-signers. Using computer-morphing software, we conducted two experiments investigating categorical perception for facial expressions by Deaf signers and hearing non-signers. We hypothesized that both groups would exhibit categorical perception for affective facial expressions, and that only Deaf signers would exhibit a CP effect for linguistic facial expressions. If the hearing group also demonstrated CP effects for linguistic facial expressions, it would suggest that ASL facial expressions may originate from natural categories of facial expression, perhaps based on affective or social facial displays. If both groups failed to demonstrate a CP effect for linguistic facial expressions, it would indicate that the perception of affective facial expressions may be innate and domain specific.