Download English ISI Article No. 37952
Article Title (English)

The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development
Article Code: 37952
Publication Year: 2010
Length: 19 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Experimental Child Psychology, Volume 105, Issues 1–2, January–February 2010, Pages 1–19

Keywords (English)
Face processing; Facial identity; Facial speech; Emotional expression; Gaze direction; Perceptual development
Article Preview

Abstract (English)

Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

Introduction (English)

Although the face recognition abilities of infants are impressive, face processing continues to develop and improve during the first decade of life. With experience, children are increasingly able to determine that a specific facial identity has been encountered before and to assess its familiarity. The recognition of facial identity, however, is only one of several crucial aspects of the face processing system, which also processes other social information such as emotional expression, facial speech, and gaze direction. In everyday life, all this information is presented simultaneously, leading to questions about how the face processing system masters this complexity and whether different kinds of face information are processed independently of or in interaction with one another. Early research on face processing in adults assumed that processing of facial identity and emotional expression is independent (e.g., Bruce and Young, 1986 and Etcoff, 1984), but more recent studies have demonstrated interactions between these dimensions (e.g., Gallegos and Tranel, 2005, Schweinberger et al., 1999 and Schweinberger and Soukup, 1998). Very few studies have been conducted with children, and almost none of them has used the methods that have been applied in research on adults. As a consequence, the pattern of facial processing in children cannot be directly compared with that in adults. In the current study, we investigated the development of facial information processing in children using the methods that have previously been employed with adults (Schweinberger & Soukup, 1998). In particular, we examined whether 5- to 11-year-olds and adults process facial identity, facial speech, emotional expression, and gaze direction independently of or in interaction with one another.

Face processing models

The models of Bruce and Young (1986) and Haxby, Hoffman, and Gobbini (2000) are concerned with the integration of identity and social face information. In the Bruce and Young model, specialized modules for the processing of face identity, emotional expression, and facial speech are assumed to operate independently of one another. The authors argued that the initial visual encoding of an unfamiliar face results in viewer-centered descriptions, the so-called face recognition units, which form the basis for independent analyses of social face information (Bruce & Young, 1986). The Haxby and colleagues model postulates distributed face processing that employs two broadly defined systems. One is responsible for analyzing invariant aspects of faces, thereby building the basis for face identity recognition, and a second system is responsible for processing changeable aspects of faces such as emotional expression, facial speech, and gaze direction. The authors assumed that the systems interact and modulate each other, resulting in a percept composed of identity and changeable social face information. In the following sections, we describe the extent to which previous findings from adults and children support independent or interactive processing of facial identity and social information such as emotional expression, facial speech, and gaze direction.

Processing identity and emotional expression in adults and children

In line with Bruce and Young (1986), some results indicate independent processing of face identity and emotional expression in adults (Calder et al., 2000, Etcoff, 1984 and Young et al., 1986). Further evidence for independent processing comes from brain studies.
Single-cell recordings of the temporal cortex of monkeys (Desimone, 1991), as well as research on patients with prosopagnosia (e.g., Humphreys et al., 1993 and Schweinberger et al., 1995), demonstrate activation of different brain areas for the processing of different categories of facial information.

Other adult studies do not support independent processing of face identity and emotional expression but rather indicate interactive processing (e.g., Ganel and Goshen-Gottstein, 2004 and Kaufmann and Schweinberger, 2004; Peng, 1989, quoted in Campbell et al., 1996 and Schweinberger et al., 1999). In Schweinberger and Soukup (1998), for example, participants sorted faces varying in two dimensions (facial identity and emotional expression) according to one dimension only while disregarding the second dimension. The second dimension was varied under three conditions: the dimension either was held constant (control condition), was correlated with the first dimension (correlated condition), or varied independently of the first dimension (orthogonal condition). The pattern of reaction times allows conclusions to be drawn about independent or interactive processing of the two dimensions. No difference in reaction times across the three conditions is indicative of independent processing, whereas interactive processing is indicated if (a) reaction times in the correlated condition are shorter (redundant gain) or (b) reaction times in the orthogonal condition are longer (interference effect) compared with the control condition. Schweinberger and Soukup found an asymmetric pattern of processing; whereas identity was processed independently of emotional expression, emotional expression was influenced by identity variation. Schweinberger and colleagues (1999) replicated this asymmetric pattern and demonstrated that it was not associated with faster facial identity processing. Even when the discrimination of identity was more difficult than that of emotional expression, the asymmetric pattern was still obtained. Several other studies using different methodological approaches supported the conclusion of an asymmetric pattern of processing (Campbell and de Haan, 1998, Ellis et al., 1990, Herzmann et al., 2004 and Kaufmann and Schweinberger, 2004). In sum, processing of facial identity and emotional expression in adults can be assumed to originate in independent functional face subsystems that interact in an asymmetric manner.

There are few studies on the relationship between processing facial identity and emotional expression in children. Mondloch, Geldart, Maurer, and Le Grand (2003) showed adult-like performance in 6-year-olds for tasks involving matching facial expression and lip reading despite changes in facial identity. Next best was their performance for matching gaze direction despite changes in facial identity and for matching identity despite changes in facial expression. These and related studies (Benton and Van Allen, 1973 and Ellis, 1992) showed that children basically recognize specific face information despite changes in a face, but they did not determine the nature of the underlying processes. In attempts to test the nature of the underlying processes, some studies have provided evidence for independent processing of facial identity and emotional expression.
For example, Odom and Lemond (1974) showed such independent processing in 6-year-olds, and Bruce and colleagues (2000) found inconsistent correlations between different recognition skills, such as recognition of identity and of social face dimensions, in 4- to 10-year-olds. Other childhood studies have suggested that identity processing interacts with processing of emotional expression. Using a speeded sorting paradigm, Chapman (1981) found asymmetric processing of eye identification and emotional expression; whereas variation in emotional expression did not influence eye identification, processing of emotional expression was influenced by variation of the eyes. According to Diamond and Carey (1977), 6- to 10-year-olds used emotional expression as a cue to facial identity, a cue that is often misleading. In sum, the results on children’s processing of face identity and emotional expression are mixed, and so far no definite conclusions can be drawn about how the two may interact.

Processing facial identity and facial speech in adults and children

Parallel to the results mentioned above, a pattern of inconsistencies can be found concerning the question of whether facial identity and facial speech are processed independently or in interaction with each other. Facial speech refers to characteristic shapes of the lips, teeth, tongue, jaw, and cheeks (Schweinberger & Soukup, 1998). A double dissociation between deficits in analyzing identity and facial speech in brain-lesioned patients (Campbell, Landis, & Regard, 1986), as well as data from healthy persons indicating right hemisphere involvement in face identity processing versus left hemisphere involvement in facial speech processing, indirectly suggests an independent mode of processing (Campbell, de Gelder, & de Haan, 1996). In addition, Campbell, Brooks, and colleagues (1996) showed more directly that facial speech is processed independently of face identity by demonstrating that familiarity of faces did not facilitate categorization of lip speech pictures. However, other results again suggest an interaction in processing facial identity and facial speech (de Gelder et al., 1991, Ellis et al., 1990, Rosenblum et al., 2000 and Walker et al., 1995). For example, Yakel, Rosenblum, and Fortier (2000) found that speakers’ identity had an effect on speech reading. More important, Schweinberger and Soukup (1998) showed processing of facial identity and facial speech to be asymmetrically related: participants could selectively attend to facial identity, but when analyzing facial speech they were influenced by identity information. Thus, previous results on adults’ processing of facial identity and facial speech are very similar to those for identity and emotional expression; in adults, both seem to rely on different subsystems with asymmetric interactions.

There are virtually no studies that have explored how infants and children process face identity and facial speech. So far, it is known only that facial speech is important and helpful for young infants in understanding speech segmentation. Studies on the perceived fit of spoken and heard speech (e.g., Meltzoff & Kuhl, 1994) and on the integration of seen and heard speech (e.g., Rosenblum, Schmuckler, & Johnson, 1997) suggest that 1-month-olds already discriminate different kinds of facial speech. According to Mondloch and colleagues (2003), an adult-like performance level in facial speech recognition can already be seen in 6-year-olds.
However, exactly how facial speech is processed in relation to face identity has yet to be studied.

Processing facial identity and gaze direction in adults and children

Eye gaze is a central aspect of communication in our social world (for a review, see Frischen, Bayliss, & Tipper, 2007). The reflexive orientation to eye gaze suggests that its processing may happen independently of other processes (Tipples, 2005). This assumption is supported by results of functional magnetic resonance imaging (fMRI) studies in adults that located the representation of facial identity in the lateral fusiform gyrus and the representation of eye gaze in the superior temporal sulcus and intraparietal sulcus (Hoffman & Haxby, 2000). The assumption is also in line with behavioral studies showing judgments of gaze direction to be relatively unaffected by changes of facial identity (Frischen and Tipper, 2004 and Tipples, 2005). The perception of eye gaze direction is already proficient in infants (e.g., Haith et al., 1977, Maurer and Salapatek, 1976 and Morton and Johnson, 1991). At 6 months of age, infants start to look in the direction that adults turn their heads and gaze (Tomasello, 1999), and at 12 months of age the gaze of another person can catch infants’ attention (Corkum and Moore, 1995, Farroni et al., 2002 and Hood et al., 1998). Some infant studies on recognition of facial identity show an influence of gaze direction (Johnson and Farroni, 2003 and Reid et al., 2004), suggesting that direct gaze contact enhances infants’ face processing. However, no studies have examined how face identity and eye gaze direction are processed simultaneously during infancy and childhood. Therefore, it is unknown whether the independent processing assumed in adults is already present in children.

The current study

The goal of the current investigation was to determine whether the asymmetric processing of facial identity in relation to emotional expression and facial speech seen in adults is also present in children between 5 and 11 years of age. We used Schweinberger and Soukup’s (1998) computer-based speeded sorting task to study the development of facial identity processing from childhood to adulthood in relation to social face information provided by facial speech (Experiment 1), emotional expression (Experiments 2 and 3), and gaze direction (Experiment 4). In all experiments, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Schweinberger and Soukup’s results predict asymmetric processing of facial identity in relation to emotional expression and facial speech. In contrast, according to the results of Tipples (2005) and Frischen and Tipper (2004), adults may be expected to process facial identity and gaze direction independently of one another. For children, the empirical evidence is too limited to derive specific hypotheses about independent or interactive processing of the different kinds of face information. However, given that the face processing system matures relatively early in development and that even young children learn to differentiate faces and social face information, parallels between children’s and adults’ face processing appear highly likely.
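
To make the logic of the speeded sorting (Garner-type) paradigm described above concrete, here is a minimal Python sketch. It assumes two illustrative binary stimulus dimensions (identity and expression) and invented reaction-time values; the function names, dimension labels, trial counts, and numbers are assumptions for illustration only, not the authors' materials or analysis code. It shows how the control, correlated, and orthogonal blocks can be constructed and how the interference effect and redundant (redundancy) gain would be computed from mean reaction times.

```python
# Illustrative sketch of the Garner-type speeded sorting logic (not the authors' code).
import itertools
import random
from statistics import mean

# Assumed, illustrative stimulus dimensions (two identities x two expressions).
IDENTITIES = ["identity_A", "identity_B"]
EXPRESSIONS = ["happy", "sad"]  # stands in for any second dimension (speech, gaze, ...)


def build_block(condition, n_trials=16, seed=0):
    """Return (identity, expression) stimuli for one block of the sorting task."""
    rng = random.Random(seed)
    if condition == "control":
        # Irrelevant dimension held constant.
        pool = [(i, EXPRESSIONS[0]) for i in IDENTITIES]
    elif condition == "correlated":
        # Irrelevant dimension co-varies perfectly with the attended dimension.
        pool = list(zip(IDENTITIES, EXPRESSIONS))
    elif condition == "orthogonal":
        # Both dimensions vary independently (all combinations occur).
        pool = list(itertools.product(IDENTITIES, EXPRESSIONS))
    else:
        raise ValueError(f"unknown condition: {condition}")
    return [rng.choice(pool) for _ in range(n_trials)]


def garner_summary(control_rts, correlated_rts, orthogonal_rts):
    """Summarize mean reaction times (ms) from the three conditions.

    interference_ms    = mean(orthogonal) - mean(control)    > 0 suggests interaction
    redundant_gain_ms  = mean(control) - mean(correlated)    > 0 suggests interaction
    Both near zero is consistent with independent processing of the two dimensions.
    """
    m_control = mean(control_rts)
    return {
        "interference_ms": mean(orthogonal_rts) - m_control,
        "redundant_gain_ms": m_control - mean(correlated_rts),
    }


if __name__ == "__main__":
    print(build_block("orthogonal", n_trials=8))
    # Hypothetical RTs for sorting by expression while identity varies.
    print(garner_summary(
        control_rts=[612, 598, 630],
        correlated_rts=[590, 585, 601],
        orthogonal_rts=[655, 662, 648],
    ))
```

In this hypothetical example, the positive interference effect and redundant gain would point toward interactive processing of the two dimensions, whereas values near zero in both measures would be consistent with independent processing.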