Caricaturing facial expressions
|Article code|Publication year|English article|Persian translation|Word count|
|---|---|---|---|---|
|37580|2000|42-page PDF|available on request|not calculated|
Publisher: Elsevier - Science Direct
Journal: Cognition, Volume 76, Issue 2, 14 August 2000, Pages 105–146
Abstract

The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how ‘face-like’ the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less ‘face-like’. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms – a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
1. Introduction

In a recent study, Calder, Young, Rowland and Perrett (1997) demonstrated an RT (reaction time) advantage for the recognition of computer-generated (photographic quality) caricatures of emotional facial expressions. They also showed that people are slower to categorize the expressions when their features are made less distinctive (i.e. an anti-caricatured representation). These results have been mirrored in recent brain-imaging work using the same caricatured expressions. Here, caricatures of fear and disgust were shown to engage different brain regions, with changes in neural activity being positively related to level of caricature (Morris et al., 1996, Morris et al., 1998, Phillips et al., 1997 and Phillips et al., 1998). The caricature procedure has also been used to investigate the perception of other facial characteristics, including identity, attractiveness and age (Benson and Perrett, 1991a, Burt and Perrett, 1995, Calder et al., 1996, Perrett et al., 1994 and Rhodes et al., 1987). All of these studies have used the same basic process, which operates by exaggerating the positions of a fixed set of anatomical feature points relative to the locations of corresponding points on a reference norm face. The particular advantage of this procedure is that it is highly objective. Hence, although the system requires a number of anatomical landmarks to be identified on the to-be-caricatured face, these are of sufficient quantity, and in a sufficient variety of locations, to ensure that all aspects of the face's shape are exaggerated. In addition, the system exploits the fact that, by changing a feature's position with respect to a reference norm, those features of the to-be-caricatured face that differ most from the norm (i.e. the distinctive features) are exaggerated the most, while features that differ minimally from the norm are exaggerated the least. This means that, to some extent, the choice of norm can govern which aspects of the face are exaggerated more than others.
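The exaggeration process described above can be sketched in a few lines: each feature point is displaced along the vector from its corresponding norm point, scaled by the caricature level, so the most distinctive points move the most. The single-landmark face, the coordinates and the +75% level below are illustrative assumptions, not data from the paper.

```python
def caricature(landmarks, norm, level):
    """Exaggerate feature-point positions relative to a reference norm.

    landmarks, norm -- lists of (x, y) feature-point coordinates.
    level -- caricature level: +0.75 for a +75% caricature, -0.25 for
             a -25% anti-caricature, 0.0 returns the original face.
    Points that differ most from the norm are displaced the most, so
    the face's distinctive features are exaggerated most strongly.
    """
    return [
        (nx + (1.0 + level) * (px - nx), ny + (1.0 + level) * (py - ny))
        for (px, py), (nx, ny) in zip(landmarks, norm)
    ]

# Hypothetical example: a single eyebrow landmark on a fearful face,
# raised 4 units above its position in the neutral-expression norm.
neutral = [(10.0, 20.0)]
fear = [(10.0, 24.0)]
print(caricature(fear, neutral, 0.75))  # [(10.0, 27.0)]: displacement grows from 4 to 7 units
```

Note that a level of 0.0 reproduces the original expression and -1.0 collapses it onto the norm, which is why negative levels yield the anti-caricatures (less distinctive faces) described above.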
Consequently, investigations of identity caricaturing have generally used an average face norm (abstracted from a number of faces of the same sex and approximate age as the to-be-caricatured faces), because here the aim is to exaggerate the features that differentiate a face from the population average (e.g. big nose, thick eyebrows, etc.). For facial expression caricaturing, however, the aim is to exaggerate the distinctive features of the expression (e.g. wrinkled nose, raised eyebrows, etc.), while leaving the distinctive features of the person's face (e.g. big nose, thick eyebrows, etc.) relatively intact. Hence, Calder et al. (1997) exaggerated each facial expression relative to a picture of the same person posing a neutral expression (i.e. a neutral facial expression norm). In the experiments described in the latter half of this paper we explore the extent to which the choice of reference norm can influence the caricature effect for facial expression. But first, we investigate the psychological basis of this effect. One interpretation offered by Calder et al. (1997) is that facial expression caricaturing works by enhancing an expression's emotional intensity. In Experiment 1 we investigated this hypothesis by asking participants to rate caricatures of fear, happiness and sadness for their intensity of fear, happiness and sadness, respectively. Each set of expression caricatures was presented in a separate block along with caricatures of an expression that is occasionally confused with the target (e.g. disgust is on occasions confused with sadness), and a third set of facial expression caricatures that is not so readily confused with the target. All of the images were caricatured at seven levels of exaggeration (−75%, −50%, −25%, 0%, +25%, +50% and +75%). By using images caricatured by as much as +75%, we ensured that some of the images fell outwith the range of facial configurations seen in natural facial expressions.
Rating studies with facial identity caricatures have shown that caricatured faces are perceived as significantly better likenesses of people than the original (undistorted) representations. However, the caricature advantage for goodness-of-likeness is small (typically in the range +4–16%), and caricaturing identity above these optimum levels produces faces that are judged as progressively worse likenesses of people. One possible outcome of Experiment 1, then, was that intensity ratings for expression caricatures would show a similar pattern. However, our own impression of the caricatured expressions instead concurs with the idea that caricaturing operates by enhancing emotional intensity; that is, emotional intensity appears to increase monotonically with increasing levels of caricature – even when the images are exaggerated to the point that they no longer resemble natural-looking faces. For example, as can be seen in Fig. 2, at higher levels of caricature smiling mouths can become exaggerated to almost twice their original size (middle row, +75%), and eyebrows raised in fear can move close to the centre of the brow (top row, +75%). Yet despite these considerable distortions the +75% expressions appear more emotionally intense than the lower levels of caricature. If our intuition was correct, this suggested an interesting hypothesis: that caricaturing should degrade the veridicality of the face (i.e. its ‘face-likeness’), but not the veridicality of the facial expression. To investigate this we asked a second group of participants to rate how ‘face-like’ the caricatures looked. We reasoned that if rated emotional intensity is found to increase as a function of level of caricature (even when the caricatures are so exaggerated that they are no longer regarded as ‘natural-looking’ faces), then this would give an insight into how expressions are coded. 
For instance, it would support the idea that facial expressions can be represented as continua as well as belonging to discrete categories. In addition, it would suggest that our representation of facial expression is coded independently of our representation of what is face-like. Experiment 2 used a similar design to determine whether this caricature effect was evident for all of the six basic emotions (happiness, sadness, anger, fear, disgust and surprise) from the Ekman and Friesen (1976) series of facial expressions. Both experiments demonstrated that enhancing the perceptual (structural) salience of a facial expression increases the intensity of the emotion displayed. Elaborating on this result, Experiments 3 and 4 used the caricature procedure to examine the predictions of two-dimensional models of facial affect recognition.

1.1. The perceptual representation of facial expressions

A continuing debate in the emotion literature concerns the psychological basis of facial expression recognition. There are currently two main theoretical positions. One proposes that facial expressions are identified by registering their positions on two continuous underlying dimensions, and the second that they activate qualitatively discrete categories. Here we are particularly interested in the predictions of the two-dimensional model. The continuous-dimensions theory was originally put forward by Schlosberg (1941, 1952) and was based on the observation by Woodworth (1938) that errors in facial expression recognition show relatively consistent patterns; for example, expressions of anger are more likely to be mistaken for disgust than for happiness or surprise, whereas expressions of fear are more readily confused with surprise than disgust, etc.
Schlosberg (1941, 1952) showed that these error patterns could be accommodated within a system comprising two continuous dimensions (pleasant–unpleasant and attention–rejection) with neutral at their cross-over point (origin); in a later paper a third dimension (sleepiness–arousal) was added (Schlosberg, 1954). Over the years a number of similar dimensional accounts of facial affect recognition have been proposed. The most widely cited modern variant is the Circumplex model (Russell, 1980 and Russell and Bullock, 1985). This has a similar structure to the Schlosberg (1952) system, in that facial expressions are coded as values on two continuous dimensions, pleasantness and degree of arousal (Fig. 1). Russell has also shown that when the stimulus materials are emotional words, a similar Circumplex structure is found to that for facial expressions. This would suggest that these two-dimensional models have a strong conceptual basis. Nonetheless, some authors have presented evidence to suggest that similar dimensional systems can also accommodate the perceptual representation of facial expressions; that is, they have shown that dimensions describing the physical shape of facial expressions are correlated with dimensions such as pleasantness and arousal. Frijda (1969), for example, has shown that ratings of expressive facial features (upturned upper lip, corners of the mouth turned down, etc.) are correlated with the dimensions he identified as underlying the representation of emotion. More recently, Yamada and colleagues (Yamada, 1993 and Yamada et al., 1993) showed that factor analyses (and discriminant analyses) of the physical displacement of feature points (e.g. corner of the mouth, corner of the eye, etc.) in schematic or human facial expressions reveal two principal dimensions which they labelled ‘slantedness’ and ‘curvedness/openness’.
Furthermore, Yamada and Shibui (1998) have shown that values on these two structural scales are correlated with participants' ratings of the same stimuli for pleasantness and arousal, respectively.

Fig. 1. The Circumplex model of emotion representation; modified from Bullock and Russell (1986). Vector 1 shows the anger expression caricatured relative to a neutral-expression norm. Vectors 2, 3 and 4 show the same anger prototype caricatured relative to happiness, fear and disgust expression norms, respectively. Note that the points depicting each emotion should be regarded as centroids of clusters; that is, each emotion does not have a precise set of co-ordinates.

In short, there is a growing body of evidence to suggest that Russell's Circumplex model of facial affect recognition may constitute a valid description of both conceptual and perceptual (structural shape) representations of facial expression. But if this type of two-dimensional model is to be accepted as a plausible ‘front-end’ of facial expression processing, it is important that it should stand up to detailed empirical testing. The caricature procedure offers a unique way of investigating this issue.

1.2. Caricaturing relative to different reference norms

The basic phenomenon of caricaturing, say, an anger expression relative to a neutral-expression norm can be represented in the type of two-dimensional model discussed by extending the vector formed between the origin (neutral) and anger; this is illustrated by vector 1 in the Circumplex model shown in Fig. 1. Note that anger (undistorted) is associated with moderate arousal and low pleasantness, and that extending the vector between neutral and anger has the effect of slightly increasing the emotion's arousal component while decreasing its pleasantness component; this concurs with the finding that people see anger caricatures as ‘more angry’ (Experiment 2).
Thus, the two-dimensional model appears to provide an adequate perceptual account of caricaturing a facial expression's shape relative to a neutral reference norm. However, the computer-based caricature procedure enables one to use any face as the reference norm; it is not restricted to caricaturing relative to neutral. An anger expression, for example, can be caricatured relative to any other emotion by simply increasing the differences between the anger face and the expression in question. So what are the predictions of the two-dimensional model for the condition in which the reference norm is another canonical expression? Applying the principle used above, we can see that in a two-dimensional account, caricaturing an anger expression relative to a disgust-expression norm (vector 4) would have a different effect to caricaturing anger relative to the neutral-expression norm (vector 1). The disgust norm causes the anger expression's arousal value to increase while leaving its pleasantness value the same, and this has the effect of shifting the anger expression towards a region occupied by fear. When the norm is a fear expression (vector 3), however, anger is shifted in the opposite direction into the region occupied by disgust. A happiness-expression norm (vector 2), on the other hand, appears to have a similar effect to caricaturing relative to neutral. The important point conveyed by Fig. 1, then, is that a two-dimensional account predicts that caricaturing an expression relative to a series of ‘different-expression’ reference norms should create a series of images that are emotionally different to one another. To test this prediction Experiment 3 examined participants' ratings of caricatures of three facial expressions (anger, fear and sadness), each caricatured relative to a series of different reference norms. 
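The geometry of this prediction can be made concrete with hypothetical (pleasantness, arousal) coordinates: caricaturing relative to a norm corresponds to extending the vector from the norm through the expression. The coordinate values below are illustrative placements chosen to match the qualitative layout of Fig. 1, not values from the paper.

```python
def caricature_2d(expr, norm, level):
    """Extend the vector from a norm to an expression in a hypothetical
    two-dimensional (pleasantness, arousal) Circumplex space."""
    return tuple(n + (1.0 + level) * (e - n) for e, n in zip(expr, norm))

# Illustrative Circumplex placements (pleasantness, arousal); anger and
# disgust share low pleasantness but differ in arousal, as in Fig. 1.
neutral = (0.0, 0.0)
anger = (-0.6, 0.5)
disgust = (-0.6, 0.1)

# Vector 1: relative to neutral, anger becomes slightly more unpleasant
# and more aroused -- still 'more angry'.
print(caricature_2d(anger, neutral, 0.75))

# Vector 4: relative to disgust, pleasantness is unchanged and only
# arousal increases, shifting the image toward the region the model
# assigns to fear -- the prediction Experiments 3 and 4 tested.
print(caricature_2d(anger, disgust, 0.75))
```

On these assumed coordinates, a +75% caricature relative to disgust leaves pleasantness at −0.6 while raising arousal from 0.5 to 0.8, so the two-dimensional account predicts a change in the emotion displayed, not merely an intensification of anger.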
Contrary to the prediction of the two-dimensional model, we found that all of the reference norms used showed a linear relationship between rated intensity and level of caricature: caricaturing relative to any other emotion served only to increase the perceived intensity of a given emotion – it did not change that emotion. This finding is inconsistent with a two-dimensional model of the perceptual representation of facial expression. In Experiment 4 we sought to replicate these findings using caricatures exaggerated to a higher level. This was to determine whether our failure to confirm the predictions of the two-dimensional model could be attributed to the maximum level of exaggeration used in Experiment 3 not being high enough. It is worth pointing out that the caricature procedure we used only manipulates a face's shape. Clearly, facial expressions contain more than just shape information; they also contain information relating to the face's texture (i.e. skin tone, whites of the eyes, etc.). However, as we discussed earlier, Yamada has shown that a factor analysis of facial measurements (i.e. shape information) alone generates a Circumplex structure similar to that observed by Russell and colleagues. This shows that emotion-relevant information sufficient for a Circumplex structure is coded in the shape of the facial expressions, and consequently, we felt that ‘shape caricatures’ provided an adequate test of the predictions of the two-dimensional model.