Download English ISI Article No. 37934
Article Title

Subtle emotional expressions of synthetic characters

Article Code | Publication Year | Pages
37934 | 2005 | 14 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: International Journal of Human-Computer Studies, Volume 62, Issue 2, February 2005, Pages 179–192

Keywords
Subtle; emotional expressions; synthetic characters
Article Preview

Abstract

This study examines the influence of the geometrical intensity of an emotional facial expression on the perceived intensity and the recognition accuracy. The stimuli consisted of synthetic faces at ten geometrical intensity levels in each of the five emotional categories. A curvilinear relationship was found between geometrical and perceived intensity. Steps of 20% geometrical intensity appear to be appropriate to enable the participants to distinguish the intensity levels. At about 30% geometrical intensity the recognition accuracy reached a level that was not significantly different from each emotion's maximum recognition accuracy. This point indicates a categorical perception of the facial expressions. The results of this study are of particular importance for the developers of synthetic characters and might help them to create more subtle characters.

Introduction

1. Introduction

Many synthetic characters are used for entertainment, communication, and work. They range from movie stars (Thomas and Johnson, 1981) and pets (Sony, 1999) to helper agents (Bell et al., 1997) and avatars for virtual cooperative environments (Isbister et al., 2000) (see Fig. 1). Characters can also have a physical body, e.g. robots. The range of robots is very wide, and this paper therefore focuses on robots that interact with humans rather than on industrial or military robots. The robots of interest for this study help the elderly (Hirsch et al., 2000), support humans in the house (NEC, 2001), improve communication between distant partners (Gemperle et al., 2003), and serve as research vehicles for the study of human–robot communication (Okada, 2001; Breazeal, 2003). A survey of relevant characters is available (Bartneck, 2002).

Fig. 1. Aibo, eMuu and the Microsoft Agent.

The ability to communicate emotions is essential for a natural interaction between characters and humans because it is not possible not to communicate. The absence of a character's emotional expressions could already be interpreted as indifference towards the human. Therefore, it is important that characters express their emotional state. Some of these characters can express emotions to improve the interaction between the character and the user (Bartneck, 2003; Breazeal, 2003) (see Fig. 1) or to visually support synthetic speech (CSLU, 1999). The CWI institute in Amsterdam developed a talking screen character that is able to express emotions based on an emotion disc (Ruttkay et al., 2000).

Three parameters and their interaction are important for the design of emotional expressions for characters: geometrical intensity, perceived intensity and recognition accuracy. We will now take a closer look at these three parameters.

1.1. Geometrical intensity

The synthetic face has certain components, such as eyebrows and a mouth, which can be manipulated. Usually, a maximum for each emotional expression is defined by reproducing already validated faces, such as the well-known Ekman faces (Ekman and Friesen, 1976). The spatial difference of each component between the neutral and the maximum expression is then divided into equal parts. To express 30% happiness, for example, the components are moved 30% of the distance between the neutral and the maximum expression (see the sketch after Section 1.3 below).

1.2. Perceived intensity

Humans are able to judge the intensity of a human's or character's expression. Several studies have been carried out in which participants evaluated such expressions (Etcoff and Magee, 1992; Hess et al., 1997).

1.3. Recognition accuracy

Each emotional expression has a certain distinctness, which can be measured by the recognition accuracy of humans observing the expression. In this study, when we refer to recognition accuracy, we do not mean the differentiability between intensity levels within one emotion, but the differentiability between emotion categories, measured as the recognition rate. In such recognition tests, the participants have to identify which emotion was expressed. Low-intensity expressions are usually less distinct (Etcoff and Magee, 1992; Bartneck, 2001) but can play an important role in human communication (Suzuki and Bartneck, 2003).
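To make the geometrical-intensity manipulation of Section 1.1 concrete, the following is a minimal sketch of linearly interpolating facial-component positions between a neutral and a maximum expression. The component names, coordinate values, and the expression_at helper are illustrative assumptions only and do not reproduce the face model actually used in this study.

# Minimal sketch of the geometrical-intensity manipulation described in
# Section 1.1; component names and coordinates are illustrative assumptions.

NEUTRAL = {"left_brow_y": 0.00, "right_brow_y": 0.00, "mouth_corner_y": 0.00}
MAX_HAPPINESS = {"left_brow_y": 0.10, "right_brow_y": 0.10, "mouth_corner_y": 0.60}

def expression_at(intensity, neutral=NEUTRAL, maximum=MAX_HAPPINESS):
    """Move every facial component `intensity` (0.0-1.0) of the way from its
    neutral position towards its maximum-expression position."""
    return {
        name: neutral[name] + intensity * (maximum[name] - neutral[name])
        for name in neutral
    }

# Ten geometrical intensity levels (10%, 20%, ..., 100%), as used for the stimuli.
levels = [expression_at(step / 10) for step in range(1, 11)]

print(expression_at(0.3))  # the "30% happiness" example from Section 1.1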
1.4. Focus of this study

We now take a look at the relationships between these three parameters. Clearly, the geometrical intensity has a direct influence on the perceived intensity and the recognition accuracy of the expression. The closer the emotional expression is to its maximum, the higher the perceived intensity of the expression. However, it cannot be assumed that this relationship is as simple as the function perceived intensity = geometrical intensity. A 30% geometrically intense expression of happiness may not be perceived as 30% intense or be correctly recognized in 30% of the cases. This study attempts to shed some light on this particular relationship.

1.5. Research questions

Based on the background given above, we define the three research questions of this study:

1. What is the relationship between the geometrical and the perceived intensity?
2. What is the influence of the geometrical intensity on the recognition accuracy of the expression?
3. What is the relationship between the perceived intensity and the recognition accuracy of the expression?

1.6. Relevance of this study

With this study we hope to provide a better insight into the perception of the emotional expressions of synthetic characters. Synthetic characters are used to an increasing degree in computer games, virtual environments, and robots. The results could be of great interest to the developers of these characters and might help them to gain more control over their designs.

1.7. Related work

Hess et al. (1997) studied the relationship between the physical intensity of an emotional expression and both the perceived intensity and the recognition of that expression, using pictures of natural faces as stimuli. They changed the physical intensity by combining a neutral face with an intense expression of an emotion using graphic morphing software in 20% steps. This is problematic, since it is impossible to control how the morphing software merges the pictures and therefore generates steps of 20% intensity. Hess et al. found a significant main effect of physical intensity on both perceived intensity and recognition accuracy. With increasing physical intensity, perceived intensity increased in a linear way. For recognition accuracy, a significant linear and quadratic trend was found. Furthermore, task difficulty was rated lower for higher intensities. Moreover, happiness was the easiest to recognize and was recognized best: almost 100% correct identifications even at low physical intensities. This happy-face advantage has been reported before (Ekman and Friesen, 1971). Hess et al. argue that their results support the theory of categorical perception only for happiness, not for the other emotions. In our study, we hope to replicate their results regarding the perceived intensity with different stimuli, namely schematic faces. Regarding the recognition accuracy, we want to find out whether our data support a categorical or a dimensional perception of emotional expressions. In the present study, however, we do not use the problematic morphing procedure to create different intensity levels. Instead, we use an animation tool, as described in the Methodology section below.

Differences in the identification of emotions between natural and synthetic faces were investigated by Kätsyri et al. (2003). They found that emotional expressions shown by a synthetic talking head that they developed (Frydrych et al., 2003) were recognized worse than emotional expressions displayed by natural faces. This suggests that synthetic faces are not an adequate alternative for emotion research. On the other hand, there is research showing that emotional expressions of synthetic faces are recognized as well as or even better than emotions on natural faces (Katsikitis, 1997; Bartneck, 2001).
Another aspect of emotional expressions is of interest to this study. The space of human emotions is frequently modeled either with dimensions, such as arousal and valence (Schlosberg, 1954; Osgood et al., 1957; Russell, 1979; Hendrix et al., 2000), or with categories, such as happiness and sadness (Ekman et al., 1972; Izard, 1977; Plutchik, 1980). It has already been shown that a two-dimensional space is insufficient to accurately model the perception of emotional facial expressions (Schiano et al., 2000). Etcoff and Magee (1992) showed that emotional facial expressions are perceived categorically. They used line drawings of emotional faces to study the relationship between the physical intensity of an emotional facial expression and its recognition. They had their subjects identify an emotion on 11 evenly spaced facial expression continua. The continua were based on merging either a neutral face with an emotionally expressive face or two faces with different emotional expressions. It was found that emotions were perceived categorically, except for surprise. This means that small physical differences in emotional facial expressions are easier to distinguish at the boundaries between emotions and harder to distinguish within one emotion category. In our study, we only use neutral-to-emotion continua for five emotions. We expect to find a boundary for each emotion where it becomes possible to recognize an expression as that particular emotion.
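The boundary we expect to find can be read off the recognition-accuracy data as the lowest geometrical intensity at which accuracy effectively reaches its plateau. The sketch below shows one hypothetical way to locate such a point; the recognition rates and the fixed tolerance of 0.05 are illustrative assumptions that stand in for the statistical comparison against each emotion's maximum recognition accuracy used in the study itself.

# Hypothetical recognition rates (proportion correct) for one emotion at
# geometrical intensities of 10%..100%; the numbers are illustrative only.
recognition = {
    10: 0.35, 20: 0.55, 30: 0.84, 40: 0.85, 50: 0.86,
    60: 0.86, 70: 0.87, 80: 0.87, 90: 0.88, 100: 0.88,
}

def plateau_onset(rates, tolerance=0.05):
    """Return the lowest intensity whose recognition rate lies within
    `tolerance` of the maximum rate, as a stand-in for 'not significantly
    different from the emotion's maximum recognition accuracy'."""
    best = max(rates.values())
    for intensity in sorted(rates):
        if rates[intensity] >= best - tolerance:
            return intensity
    return None

print(plateau_onset(recognition))  # -> 30 for these illustrative numbers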

Conclusions

We conducted a study of synthetic facial expressions that explored the relationship between geometrical intensity, perceived intensity and recognition accuracy. Our results show that it is possible to communicate emotions even at low intensity levels and thereby to enable characters and robots to act more subtly.

Fear and happiness remain two special emotional categories for facial expressions. The happy-face advantage shows how sensitive humans are in perceiving positive expressions. Since the repertoire of positive expressions is limited to smiling, it is good to know that it is also correctly recognized at low intensities. Fear is a problematic expression, since it is difficult to recognize and its intensity is difficult to judge.

The results of our study indicate that emotional expressions might be perceived categorically. The strong increase of recognition accuracy at about 30% geometrical intensity could be interpreted as categorical perception as described by Etcoff and Magee (1992). However, we only explored facial expressions between the neutral face and the most intense face for each emotion, and not between two different emotions. Therefore, our results can only be an indication.