Understanding emotional expression using prosodic analysis of natural speech: Refining the methodology
|Article code||Publication year||English article||Persian translation||Word count|
|37954||2010||8-page PDF||Order||Not calculated|
Publisher: Elsevier - Science Direct
Journal: Journal of Behavior Therapy and Experimental Psychiatry, Volume 41, Issue 2, June 2010, Pages 150–157
Abstract

Emotional expression is an essential function for daily life that can be severely affected in some psychological disorders. Laboratory-based procedures designed to measure prosodic expression from natural speech have shown early promise for measuring individual differences in emotional expression but have yet to produce robust within-group prosodic changes across various evocative conditions. This report presents data from three separate studies (total N = 464) that digitally recorded subjects as they verbalized their reactions to various stimuli. Format and stimuli were modified to maximize prosodic expression. Our results suggest that evocative slides organized according to either a dimensional (e.g., high and low arousal crossed with pleasant, unpleasant and neutral valence) or a categorical (e.g., fear, surprise, happiness) model produced robust changes in subjective state but only negligible change in prosodic expression. Alternatively, speech from the recall of autobiographical memories resulted in meaningful changes in both subjective state and prosodic expression. Implications for the study of psychological disorders are discussed.
1. Introduction

Emotional expression is essential to a wide range of human functions (Decety and Lamm, 2006, Gross, 2002 and LeDoux, 2000) and is compromised in a host of psychiatric and developmental disorders, such as schizophrenia (Cohen et al., 2008 and Cohen et al., 2005), depression (Leventhal, Chasson, Tapia, Miller, & Pettit, 2006) and autism (Matese et al., 1994 and South et al., 2008). Objectively measuring emotional expression has been an integral endeavor for a wide range of empirical pursuits. One particularly important method involves acoustic analysis of vocal expression. Despite considerable empirical work employing acoustic analysis of natural speech to understand expressivity in both pathological and nonpathological samples (Scherer, 2003 and Sobin and Alpert, 1999), the field has been limited by the lack of a standardized method of speech procurement.

The Computerized Assessment of affect from Natural Speech (CANS; Cohen, Minor, Najolia, & Hong, 2009) is a laboratory-based procedure designed to measure both lexical and prosodic expression from natural speech across a range of evocative conditions. The CANS procedure involves having individuals generate free speech in response to standardized stimuli. In this manner, the procedure is highly controllable in terms of mood-induction effects and is repeatable, while also being sensitive and applicable to a wide range of both clinical and non-clinical populations. This methodology also allows the free speech to be examined lexically (see Cohen, Iglesias, et al., 2009 and Cohen, Minor, et al., 2009 for expansion on this point).

Several recent studies have offered preliminary support for the CANS. First, two studies of adults found that prosodic expression generated in response to affectively valenced pictures from the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2005) changed significantly across neutral, pleasant and unpleasant conditions.
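Prosodic variables of the kind the CANS extracts are derived acoustically from the recordings. The paper does not specify the signal processing involved, but as a rough, assumption-laden sketch, one core measurement underlying inflection-type variables, fundamental frequency (F0), can be estimated per frame by autocorrelation (the function name and parameter defaults here are illustrative, not the CANS implementation):

```python
import math

def estimate_f0(frame, sample_rate, fmin=75.0, fmax=500.0):
    """Estimate the fundamental frequency (F0) of one voiced frame by
    picking the autocorrelation peak inside a plausible pitch range."""
    lo = int(sample_rate / fmax)                       # shortest candidate period, in samples
    hi = min(int(sample_rate / fmin), len(frame) - 1)  # longest candidate period
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, hi + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return sample_rate / best_lag

# A synthetic 220 Hz tone stands in for one voiced speech frame.
sr = 8000
frame = [math.sin(2 * math.pi * 220 * n / sr) for n in range(400)]
print(estimate_f0(frame, sr))  # close to 220 Hz (the lag is quantized to whole samples)
```

Summary statistics of F0 across frames (e.g., its mean and variability per utterance) then serve as condition-level prosody variables of the sort compared in the studies below.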
These CANS variables showed high temporal stability, suggesting they are a reliable index of individual differences in emotional expression. Changes in CANS variables also corresponded to subjective appraisals of state arousal but not valence (i.e., pleasant or unpleasant mood ratings). This suggests that prosodic expression reflects relatively specific nervous system activities that are distinct from subjective emotional states.

Although the CANS methodology appears promising for a wide range of applications, the procedure is limited in that the magnitude of prosodic change, while statistically significant, has been marginal. In the aforementioned CANS studies, the effect sizes between conditions were generally in the negligible range (i.e., <0.20; Cohen, 1988). Thus, one can question the degree to which the CANS can elicit meaningful prosodic changes. This may reflect, in part, the fact that it was subjective arousal ratings, not valence ratings, that were associated with prosodic change, even though the stimuli were categorized solely by valence (neutral, pleasant, and unpleasant). This becomes an issue considering that valence and arousal are not necessarily linked (Cacioppo & Berntson, 1999). That is, visual stimuli with pleasant and unpleasant tones can have dramatically different arousal ratings. For example, a "cemetery scene" versus "mutilated bodies" would yield low and high arousal levels respectively, although both stimuli would be categorized as unpleasant.

We thus turn to the circumplex model of emotion as a model for understanding variability in emotions, wherein emotional states are viewed as occupying a two-dimensional valence × arousal space. This well-established model, validated for psychological research, may be more appropriate for conceptualizing prosodic change in that it accounts for both valence and arousal simultaneously (Russell, 1980 and Russell, 2009).
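The "negligible range" criterion above refers to Cohen's d, the between-condition mean difference standardized by the pooled standard deviation. As a quick illustration with invented scores (not the study's data), a small condition shift relative to between-subject spread yields a d below the 0.20 cutoff:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: mean difference between two groups divided by the pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Invented per-subject scores on one prosody variable in two conditions.
neutral = [18.2, 19.1, 17.6, 18.8, 19.4, 18.0]
unpleasant = [x + 0.1 for x in neutral]  # a tiny condition shift
print(round(cohens_d(unpleasant, neutral), 2))  # 0.14: negligible by the <0.20 criterion
```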
In the first study presented here, our goal was to maximize prosodic expression across conditions. We thus administered a CANS procedure to healthy adults using stimuli that evoke separate high and low arousal states with neutral, pleasant and unpleasant valences.

While the circumplex model has been one of the primary concepts in fundamental emotion research to date, there is surprisingly little research on valence and arousal in terms of prosody. Rather, most prior prosody studies have conceptualized emotions using categorical models comprising distinct emotions. While a variety of such models exist (e.g., Frijda et al., 1989 and Scherer, 1984; the "natural kinds" model, Barrett, 2006 and Ekman et al., 1980) and they differ in their putative neurobiological or cognitive underpinnings, a commonality across them is that emotions reflect discrete categories. Under this categorical taxonomy, different emotional states are treated as categorically distinct "kinds": anger, fear, surprise, happiness. Prosodic changes across these emotional states have been well documented (see Sobin & Alpert, 1999 for a review). For example, anger and fear have been associated with increases in inflection, amplitude and emphasis, while sadness and disgust have been associated with declines in at least some of these variables (Scherer, 2003, Sobin and Alpert, 1999 and Ververidis and Kotropoulos, 2006). In a second study presented here, we employed a CANS procedure using stimuli that represent a range of distinct emotional states.

With respect to maximizing prosodic changes across conditions, a final point to consider is whether picture stimuli are effective at evoking prosodic change at all. While picture stimuli can effectively evoke a range of subjectively-reported emotional states (Lang et al., 2005), these ratings are not immune to demand characteristics.
Thus, the degree to which emotion is actually experienced is unclear, especially since it is not strongly observed in prosodic change. A particular concern is that picture stimuli may not provide sufficient personal relevance for an individual. An alternative to using picture stimuli is to have subjects offer freely-generated autobiographical statements. While not as objective as picture stimuli, a free-recall condition may elicit stronger prosodic reactions because the task is more personally relevant. The third study presented here employed autobiographical free-recall to evoke prosodic changes.

In summary, the present study indirectly compared three separate CANS formats to determine which was optimal for maximizing prosodic expression across conditions. We thus employed three experiments with the following conditions: (1) picture stimuli based on a dimensional "circumplex" model of emotion (Russell, 1980); (2) picture stimuli based on a categorical model (Barrett, 2006 and Ekman et al., 1980); and (3) free-recall of autobiographical memories. We evaluated these procedures by comparing prosody variables across the various conditions in terms of both statistical significance and magnitude of effect, using effect size statistics.
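The closing sentence motivates reporting both significance and effect size: with large samples (total N = 464), even trivial prosodic shifts can reach significance, because the paired t statistic grows with sqrt(n) while the standardized effect size does not. A minimal sketch of such a within-subject contrast, with invented numbers and a hypothetical helper name:

```python
from math import sqrt
from statistics import mean, stdev

def paired_comparison(cond_a, cond_b):
    """Within-subject contrast on one prosody variable: returns the paired
    t statistic and a standardized effect size d = mean(diff) / SD(diff)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n, m, sd = len(diffs), mean(diffs), stdev(diffs)
    t = m / (sd / sqrt(n))  # significance scales with sqrt(n) ...
    d = m / sd              # ... while the effect size does not
    return t, d

# Invented F0-variability scores for four subjects in two conditions.
recall = [21.5, 23.0, 20.5, 24.0]
pictures = [21.0, 22.0, 20.0, 23.0]
t, d = paired_comparison(recall, pictures)
print(round(t, 2), round(d, 2))  # 5.2 2.6
```

Judging each condition contrast on d as well as t is what lets the studies distinguish a robust prosodic change from one that is merely statistically detectable.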