Multiple cues in social perception: The time course of processing race and facial expression
|Article code||Publication year||English article||Persian translation||Word count|
|37704||2007||15-page PDF||Available to order||Not calculated|
Publisher : Elsevier - Science Direct
Journal : Journal of Experimental Social Psychology, Volume 43, Issue 5, September 2007, Pages 738–752
Abstract
The purpose of the present study was to examine the time course of race and expression processing to determine how these cues influence early perceptual as well as explicit categorization judgments. Despite their importance in social perception, little research has examined how social category information and emotional expression are processed over time. Moreover, although models of face processing suggest that the two cues should be processed independently, this has rarely been directly examined. Event-related brain potentials were recorded as participants made race and emotion categorization judgments of Black and White men posing happy, angry, or neutral expressions. Our findings indicate that race and emotion cues are processed independently and in parallel, relatively early in processing.
Introduction
Faces convey important social information that is useful for a variety of inferences. For instance, racial group membership and emotional expression can be informative about an individual’s likely traits, attributes, and behavioral intentions (Bodenhausen and Macrae, 1998, Brewer, 1988, Devine, 1989 and Fiske et al., 1999), and perhaps not surprisingly, extensive research has documented our ability to quickly and efficiently extract both types of information from faces (Eimer and Holmes, 2002, Eimer et al., 2003, Felmingham et al., 2003, Holmes et al., 2003, Ito et al., 2004, Ito and Urland, 2003, Ito and Urland, 2005, James et al., 2001, Schupp et al., 2003, Schupp et al., 2004 and Vanderploeg et al., 1987). While it is possible to consider how individual sources of information affect social inferences, such as how race affects evaluations, it is also important to consider the more naturalistic question of how multiple sources of social information are processed from faces. That is, we can consider how both race and emotion information are processed from the same face. Models of face processing suggest that information about social identity and emotional expression is processed separately and in parallel (Bruce and Young, 1986 and Haxby et al., 2000). According to Bruce and Young (1986), social category information such as age and gender and information regarding emotional expression are processed by functionally separate components of the face perception system. Moreover, these separable components are assumed to operate in parallel (see also Mouchetant-Rostaing & Giard, 2003), suggesting little interaction between the two types of information, at least in initial stages of perception. Haxby et al. 
(2000) have similarly argued that the perception of invariant, relatively unchangeable aspects of a face, which would presumably include social category information, is theoretically and neurally dissociable from the perception of changeable aspects of a face such as facial expressions. However, despite the theoretical importance of this issue, little research has simultaneously examined the perception of race and expression. Instead, the majority of previous research has examined these cues in isolation: research on race perception typically examines responses to faces with emotionally neutral expressions (e.g., Devine et al., 2002, Greenwald and Banaji, 1995 and Levin, 2000), and research on emotional expression often examines responses to faces depicting only one race (typically Caucasian) (e.g., Eimer et al., 2003 and Hansen and Hansen, 1988). While it is necessary to investigate the independent effects of these cues, they are typically perceived simultaneously, making it important to understand how the cues are perceived in combination throughout processing. Our purpose in the present study was therefore to examine how both race and expression information are processed from the same face. In addition, we wished to examine the time course of race and emotion processing. The development of increasingly sophisticated social cognition and neuroscience methods has allowed social psychologists to obtain more precise information about the timing and ordering of social perception processes. This research tells us that what may be considered a single outcome (“face perception”) is likely composed of numerous, theoretically separate processes (Haxby et al., 2000). 
Using measures with sensitive time course information allows us to examine not only how race and expression information are processed, but also how this unfolds over time, starting with early perceptual processes and ending with explicit race and expression categorization judgments. The importance of examining time course information is highlighted by recent research demonstrating interactive effects between race and emotion information in explicit categorization judgments. Hugenberg and Bodenhausen (2003, 2004) found both that facial expressions influence racial categorization of racially-ambiguous faces and that race influences emotion perception of emotionally-ambiguous faces. Specifically, consistent with cultural stereotypes linking Blacks with more negative attributes, especially with aspects related to hostility and threat, White participants higher in implicit prejudice were more likely to categorize racially-ambiguous faces as African American when the face displayed an angry expression as opposed to a happy expression (Hugenberg & Bodenhausen, 2003). Similarly, White individuals high in implicit prejudice more readily perceived anger than happiness in unambiguous Black but not White faces (Hugenberg & Bodenhausen, 2004). Moreover, when categorizing by facial expression, participants respond more quickly to angry and sad expressions (both affectively congruent with negative evaluations) than to happy expressions for Black targets, but respond more quickly to happy expressions for White targets (Hugenberg, 2005). These effects differ from those predicted by face processing models. This may be accounted for by the type of response considered. Face processing models treating race and expression processing as independent are generally interpreted as referring to relatively early stages of perception, whereas Hugenberg and Bodenhausen measured explicit, self-reported responses. 
The difference in the pattern of results and type of responses considered suggests the importance of examining a wider time course of responding in order to determine whether the likelihood of interactions between race and expression information differs as a function of the type of response assessed. In addition, it is worth noting that the interactive effects just discussed were obtained in situations where one of the cues was ambiguous, and/or with somewhat atypical-looking faces generated from animation software. This raises the question of whether interactive effects among types of facial information are more likely when cues are ambiguous and/or novel, or whether they occur more broadly and should therefore also be expected when perceiving actual faces with unambiguous race and facial expressions. We will examine this in the present study by investigating the simultaneous perception of race and expression information when both are relatively easy to perceive.
Investigating the time course of social perception
Investigating the time course of social perception requires a measure that has high temporal resolution and is known to be sensitive to the different psychological processes that may occur during the perception of race and expression. Event-related brain potentials (ERPs) are particularly well suited for this. ERPs utilize high sampling rates to record electrical brain activity that can index early attentional and perceptual processing across time. ERP waveforms are composed of distinguishable components, some negative-going and some positive-going, that occur at a particular latency and over particular scalp sites. Individual components are thought to reflect different psychological processes, and multiple components are typically elicited by the same stimulus, allowing for the measurement of multiple responses that unfold over time to the same stimulus. 
In particular, component amplitude is thought to reflect the degree to which the psychological process associated with the individual component has been engaged, and component latency is thought to reflect the point in time by which the psychological operation has been completed.
ERP components relevant to race and expression perception
We focus on four particular ERP components of relevance to the perception of race and expression. These components provide a basis for examining sensitivity to race and expression information from faces across time. Past research suggests that some of the earliest components sensitive to race and expression information—including the N100, a negative deflection occurring around 100 ms after stimulus onset, and the P200, a positive deflection occurring around 200 ms—may reflect orienting to threatening and/or distinctive stimuli. At the N100, threatening primes elicit more attention (i.e., larger amplitudes) than positive primes (Weinstein, 1995); in a passive viewing task, angry faces elicit larger amplitudes than neutral faces (Felmingham et al., 2003), and Black faces elicit larger amplitudes than White faces from White participants (Ito and Urland, 2003 and Ito and Urland, 2005). A similar pattern of results is obtained for drawings of faces, where negative expressions elicit larger amplitudes at the N100 than happy expressions (Vanderploeg et al., 1987). Previous research on the P200 has obtained a similar pattern of results, with larger P200s to angry and fearful facial expressions than to neutral facial expressions (Eimer and Holmes, 2002 and Eimer et al., 2003), and larger P200s to Blacks than Whites from White participants (Ito and Urland, 2003 and Ito and Urland, 2005). 
These findings of greater attention to more threatening and/or more distinctive and novel stimuli may reflect an automatic vigilance effect in which attention is quickly and relatively automatically directed to stimuli with potentially negative implications for the self (Carretié, Mercado, Tapia, & Hinojosa, 2001). We expect to replicate past effects showing greater attention and larger N100s and P200s to angry as compared to happy and neutral expressions (cf., Eimer and Holmes, 2002, Eimer et al., 2003, Felmingham et al., 2003 and Vanderploeg et al., 1987); similarly, Black faces should elicit greater attention and therefore larger N100s and P200s than White faces from the largely non-White participants in the present sample (Ito and Urland, 2003 and Ito and Urland, 2005). We will also examine whether these cues are processed in parallel or appear to interact at this point in time, in order to test the assumption that race and expression are processed independently. Slightly later in processing, the N200, a negative deflection occurring at approximately 250 ms, has similarly been broadly associated with selective attention, but in the context of person perception, N200 amplitude has been more specifically associated with deeper processing of faces that participants might benefit from individuating or have practice individuating. For instance, N200s are larger to pictures of one’s own face than to others’ faces (Tanaka, Curran, Porterfield, & Collins, 2006), and to famous as compared with unfamiliar faces (Bentin & Deouell, 2000). In the case of expression and race, although automatic vigilance mechanisms make it adaptive to initially devote greater attentional resources to angry faces and faces of racial outgroup members (as reflected in the N100 and P200), individuals with positive or neutral expressions, and racial ingroup members, are probably more desirable targets for individuation. 
Consequently, this component is typically larger to neutral than fearful faces (Eimer and Holmes, 2002 and Holmes et al., 2003) and to faces of one’s own race than of other races (Ito et al., 2004, Ito and Urland, 2003, James et al., 2001, Tanaka et al., 2004 and Walker and Tanaka, 2003). For the N200, individuation and deeper processing predict larger N200s to faces displaying happy, approachable expressions and to Whites (cf., Holmes et al., 2003, Ito and Urland, 2003, James et al., 2001, Tanaka et al., 2004 and Walker and Tanaka, 2003). And again, we are interested in whether race and facial expression interact at the N200. A final component typically measured when investigating affect is the P300, a positive deflection occurring after 300 ms. This component is often conceptualized as indexing attention to motivationally relevant stimuli or stimuli that are highly arousing. Typically, this takes the form of larger amplitudes to emotionally-valenced stimuli (Dolcos and Cabeza, 2002 and Eimer et al., 2003). When utilizing faces, fearful faces have elicited larger P300s than neutral faces (Eimer and Holmes, 2002 and Holmes et al., 2003), but angry faces have also elicited larger P300s than both happy and neutral facial expressions (Schupp et al., 2003 and Schupp et al., 2004), which may reflect the greater arousal associated with viewing angry compared with happy expressions. Past P300 research showing larger responses to more arousing stimuli predicts larger P300s to angry expressions. This component has not been examined for race perception using the type of paradigm we employ here, but findings of larger P300s to more arousing stimuli may predict larger P300s to racial outgroup members.1 As with the other components, we will assess whether the two types of cues combine to affect the P300. 
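The ERP logic described above (averaging many time-locked EEG epochs so that activity not locked to the stimulus cancels out, then reading a component's amplitude and latency within a fixed post-stimulus window) can be sketched with simulated data. This is a minimal illustration, not the paper's actual recording or scoring pipeline: the trial count, noise level, simulated "P200-like" deflection, and 150–250 ms search window are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
times = np.arange(0, 600)  # 0..599 ms post-stimulus, one sample per ms

# Simulate 50 single-trial epochs: a positive deflection peaking near 200 ms
# (a stand-in for a P200-like component), buried in trial-to-trial noise.
component = 5.0 * np.exp(-0.5 * ((times - 200) / 30) ** 2)   # microvolts
epochs = component + rng.normal(0.0, 5.0, size=(50, times.size))

# Averaging across trials attenuates noise that is not time-locked to the
# stimulus, leaving the event-related potential.
erp = epochs.mean(axis=0)

# Component amplitude: peak voltage within the 150-250 ms search window.
# Component latency: the time of that peak, taken to index when the
# associated psychological operation has been completed.
win = (times >= 150) & (times <= 250)
p200_amplitude = erp[win].max()
p200_latency = times[win][erp[win].argmax()]

print(f"P200 amplitude ~ {p200_amplitude:.1f} uV at {p200_latency} ms")
```

Averaging is what makes the measure usable: noise in the mean shrinks roughly with the square root of the number of trials, which is why ERP studies present many repetitions of each stimulus category.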
In addition to ERP responses, we also recorded response latencies as participants made explicit race and emotion categorizations in order to obtain a behavioral indication of whether race and emotion influence each other. Predicted results can be generated from several lines of research. Research on emotion perception has found faster categorization of happy than angry facial expressions (Eimer et al., 2003; see Leppänen & Hietanen, 2003, for a review). This is thought to occur because happy faces have a higher base rate and are therefore more familiar and easier to categorize (Beaupré & Hess, 2003). Happy expressions may also be more discriminable than negative expressions because the latter are more similar to each other in featural arrangement (Johnston, Katsikitis, & Carr, 2001). An extensive body of research also shows faster categorization of and greater attentional capture by Black than White faces (e.g., Levin, 2000 and Stroessner, 1996), predicting faster responses to Black than White faces. Combining these two effects predicts the fastest responses to happy Black faces. However, as noted earlier, Hugenberg (2005) found evaluative congruency effects in emotion categorization response times such that responses were fastest to Black faces displaying negative expressions and to White faces displaying happy expressions. The addition of a race categorization task will allow us to assess whether a similar evaluative congruency effect occurs in this type of categorizing. A final purpose of the study was to examine whether any effects obtained during the categorization of race and emotion are due to race and emotion cues per se or to low-level perceptual features that may covary with race and expression. We did this by recording ERPs as participants viewed face images that were both inverted and blurred, thereby preserving perceptual features such as color and luminance but leaving them unrecognizable as faces.
English conclusion