Title
Interpersonal deficits meet cognitive biases: memory for facial expressions in depressed and anxious men and women

Article code: 37594 · Year of publication: 2002 · Length: 15 pages (PDF)

Source

Publisher : Elsevier - Science Direct

Journal : Psychiatry Research, Volume 113, Issue 3, 30 December 2002, Pages 279–293

Keywords
Memory biases; Depression; Facial expressions; Comorbidity; Gender differences

Abstract

Memory biases for negatively vs. positively valenced linguistic information in depression are well documented. However, no study so far has examined the relationship between depression and memory for facial expressions. We examined memory for neutral, happy, sad, and angry facial expressions in individuals suffering from comorbid depression and anxiety (COMs, N=23) or from anxiety disorders (ANXs, N=20) and in normal controls (NACs, N=23). Two main hypotheses were examined. First, we expected COMs, but not NACs, to exhibit an enhanced memory for sad and angry vs. happy expressions (negativity hypothesis). Second, we postulated that this bias would be specific to depression (disorder-specificity hypothesis). Data supported both these hypotheses. Specifically, COMs exhibited enhanced recognition of angry compared to happy expressions; in contrast, ANXs and NACs did not exhibit such enhancement. We also found that men showed a significantly better memory for angry vs. sad expressions, whereas women did not exhibit such a difference. The implications of these findings for the interpersonal processes involved in the maintenance of depression and anxiety are discussed.

Introduction

Until recently, models of depression maintenance tended to be neatly divided into cognitive (e.g. Beck, 1967 and Beck, 1976) and interpersonal (e.g. Coyne, 1976) types. A growing realization that such single-factor models cannot capture the complexity of human functioning has led to a more complete integration of cognitive and social factors involved in the maintenance of depression (e.g. Joiner, 2000 and Hammen, 1997). Interpersonal transaction is likely to interact with cognitive structures to ameliorate or exacerbate negative mood; cognitive processes are likely to interact with the social environment to contribute to the probability of engaging or disengaging in social contact. However, whereas the importance of combining basic research in interpersonal processes with basic research in cognitive processes was realized on a theoretical level, few attempts have been made to implement it empirically. One area that is surprisingly absent from such research concerns memory biases for interpersonal information. Filling this gap is the purpose of this study.

Theorists from various perspectives proposed that depressed individuals should show a mood-congruent memory bias, that is, selective memory for material that is consistent with the depressed mood and/or concerns (e.g. Beck, 1967, Beck, 1987, Bower, 1987, Williams et al., 1988 and Williams et al., 1997). Indeed, memory biases for verbal information have been consistently found in both clinical and sub-clinical samples (e.g. Williams et al., 1997). Specifically, many studies have shown that clinically and sub-clinically depressed individuals are negatively biased in the recall of emotionally valenced verbal material (for review, see Matt et al., 1992). In a typical study, participants are presented with mood- or personality-relevant descriptive words (negative and positive trait adjectives), and are instructed to carry out self-referent processing of the words (e.g. Bradley and Mathews, 1983).
A robust finding across these studies is that clinically depressed adults show higher recall for negative adjectives than for positive adjectives, whereas non-depressed subjects are either even-handed or positively biased in their recall (Bradley and Mathews, 1983, Bradley and Mathews, 1988, Mathews and Bradley, 1983, Bradley et al., 1995, Denny and Hunt, 1992, Derry and Kuiper, 1981, Elliott and Greene, 1992 and Watkins et al., 1992). However, the examination of memory biases in depressed individuals has hitherto been limited to verbal information. Surprisingly, no study so far has examined depressives’ memory for visually presented, interpersonally relevant information. Facial expressions of emotions seem particularly well suited for this purpose (Foa et al., 2000). First, facial expressions are ubiquitous and biologically significant (e.g. Hansen and Hansen, 1994 and Ekman and Friesen, 1976). Second, facial expressions of emotions are salient features of the interpersonal environment that are present in most interactions and are a powerful social stimulus (e.g. Buck, 1984 and Ekman, 1993). The gap in the study of memory for facial expression in depression is even more surprising given that recognition and interpretation of emotional expressions by depressed individuals have been a subject of considerable scrutiny. Although evidence is still conflicting, it has been reported that depressed patients showed an impaired ability to decode facial expressions (Gur et al., 1992, Mikhailova et al., 1996 and Rubinow and Post, 1992; but see also Lane and DePaulo, 1999). For example, Gur et al. (1992) found that depressed patients performed more poorly on measures of sensitivity for happy discrimination and specificity for sad discrimination and had a higher negative bias across tasks. They also found that severity of negative affect was correlated with poorer performance. 
Thus, they concluded that depression is associated with an impaired ability to recognize facial displays of emotion. Persad and Polivy (1993) assessed depressed individuals’ identification of and behavioral response to various emotional expressions. They also found that depressed individuals were impaired in the identification of emotional expressions as compared to normal controls (but not to other psychiatric controls). In that study, depressed individuals also reported more distress in reaction to emotional faces. For example, depressed individuals reported more freezing or tensing; higher fear and depression reactions; and less comfort with their own emotional reactions. Recently, Bouhuys et al. (1999) found that high levels of perception of negative emotions in schematic ambiguous faces, whether assessed at admission or at remission, were associated with relapse into depression 6 months thereafter. Importantly, this finding could not be accounted for by differences in other related variables such as type of depression, gender, initial severity of depression, duration of the index episode, residual symptoms at remission, differences in medication, or age. Based on these data, Bouhuys et al. argue that fundamental cognitive mechanisms specifically concerning nonverbal interpersonal stimuli are involved in depression relapse. In sum, memory biases for negative self-relevant information in depression on the one hand, and depressives’ sensitivity to emotional expressions on the other, point to the importance of examining memory for interpersonal information. Because depressed individuals as a group exhibit social interaction problems, and because negative memory biases were found with verbal information, we postulated that following an incidental memory task, depressed individuals would remember more negative (sad and angry) than non-negative (happy and neutral) faces, while controls would not exhibit this bias. 
We decided to include both sad and angry expressions as stimuli. Sad expressions were included because they are congruent to the participants’ internal feeling states, and thus constitute the ideal ‘mood-congruent’ stimuli. Angry expressions were included because studies examining attention found that depressed individuals preferentially process a wide range of emotional stimuli. For example, it has been found that depressed participants are selectively attentive not only to depressed-content words, but also to socially threatening words (Mathews et al., 1996), and show enhanced vigilance for anxiety-relevant words (Mogg et al., 1995). Moreover, recent findings indicate that depressed individuals exhibit selective attention to angry emotional expressions (e.g. Gilboa-Schechtman et al., 2003).

The second goal of the present study was to examine the question of disorder-specificity with respect to memory for facial expressions. As is commonly noted, many similarities exist between depression and anxiety, raising the question of whether these disorders represent distinct nosological entities (e.g. Dobson, 1985 and Mineka et al., 1998). First, there exists a considerable overlap between measures of depression and of anxiety, with an average correlation of 0.61 (Dobson, 1985). Second, the average comorbidity rate of depression with various anxiety disorders is approximately 58% (Mineka et al., 1998). However, it has been postulated that depression, but not anxiety, should be associated with memory bias for negative information (Williams et al., 1997). This is due to different processing ‘modes’ affecting the operation of affective information in depression and in anxiety: elaborative processes, affecting encoding and retrieval processes, are postulated to be involved in depression; incorporation processes, affecting mostly attention and interpretation, are postulated to be involved in anxiety.
Based on the predictions of the elaboration–incorporation theory, we postulated that negative biases in memory for emotional expressions would be specific to depression, and would not be found in anxiety. In this experiment, participants were first presented with images of individuals with neutral, happy, angry, and sad expressions and requested to indicate whether they would be willing (‘yes’) or unwilling (‘no’) to get acquainted with these individuals. Later, they were presented with the same (‘old’) images interspersed with images of the same individuals with different emotional expressions (‘new’) and asked to label each image as either ‘old’ or ‘new.’ Dependent measures were percent of correct vs. incorrect recognitions and latencies for making recognition decisions. We hypothesized that: (a) Depressed individuals would exhibit a bias favoring negative as compared to non-negative expressions whereas controls would not reveal such a bias (negativity hypothesis). Specifically, we predicted that depressed individuals would recognize more negative as opposed to non-negative expressions, whereas controls would not exhibit this bias. (b) Anxious individuals would be similar to controls and not to depressed individuals in their memory for facial expressions (disorder-specificity hypothesis).
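The old/new labeling task described above yields a hit rate (old images correctly called ‘old’) and a false-alarm rate (new images incorrectly called ‘old’) per participant. A minimal sketch of that scoring, with invented trial data (the trial structure and counts here are illustrative assumptions, not the authors' materials):

```python
# Score an old/new recognition test: hits are 'old' responses to old items,
# false alarms are 'old' responses to new items.
def score_recognition(trials):
    """trials: list of (is_old: bool, said_old: bool) tuples."""
    old = [said for is_old, said in trials if is_old]
    new = [said for is_old, said in trials if not is_old]
    hit_rate = sum(old) / len(old)
    false_alarm_rate = sum(new) / len(new)
    return hit_rate, false_alarm_rate

# Hypothetical responses from one participant (8 old items, 8 new items).
trials = [(True, True)] * 6 + [(True, False)] * 2 + \
         [(False, True)] * 2 + [(False, False)] * 6
hr, far = score_recognition(trials)
print(hr, far)  # → 0.75 0.25
```

These two rates are the raw ingredients for the percentage and signal-detection measures reported in the Results.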

Results

3.1. Analysis of participants’ characteristics

Information about demographic and psychopathological variables of all participants is presented in Table 1. As can be seen from Table 1, the groups did not differ on any demographic measure. To examine group differences in symptom level, we conducted a one-way MANOVA with Group (COMs vs. ANXs vs. NACs) as a between-subject variable, and the symptom measures (BDI, BAI, STAI-T, and STAI-S) as the dependent measures. As expected, results indicated significant differences between the groups, F(8, 112)=14.14, P<0.001. Post hoc Tukey comparisons were also performed. Both clinical groups were significantly more depressed and anxious than the control group. The COM group was also significantly more depressed and anxious than the ANX group on all our self-report measures.

Table 1. Means and standard deviations of demographic variables and symptomatology measures

Variable      Comorbid (N=23)   Anxious (N=20)    Control (N=23)
Age           32.87a (9.4)      37.30a (11.86)    32.91a (9.71)
Education     12.54a (1.97)     12.84a (1.95)     12.69a (1.49)
BDI           25.87a (7.24)     8.81b (6.17)      2.56c (2.25)
BAI           27.09a (12.64)    15.19b (10.31)    2.65c (3.30)
STAI-trait    56.43a (7.77)     41.84b (12.58)    28.43c (4.79)
STAI-state    65.24a (6.93)     39.95b (13.29)    26.34c (4.89)

For each variable, different subscripts represent a statistically significant difference between the groups at P<0.05.

3.2. General data analysis strategy

The basic analysis undertaken in this study was a 3 (group: COMs, ANXs, NACs)×2 (gender: men vs. women)×4 (emotion: neutral, happy, sad, angry) ANOVA. We also performed the analyses with Set (Set A or Set B) and Gender of the picture (male vs. female) as additional (between- and within-subjects, respectively) variables. However, since no main effects or interactions with these variables were observed, they are not presented below.
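As a simplified illustration of the between-group part of this design, a one-way F test can be run on any single cell of the emotion factor. This sketch uses synthetic hit-rate data (group sizes follow the paper; the values and the restriction to one cell are assumptions — the paper's actual analysis is the full mixed ANOVA):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic hit rates for angry faces in the three groups (values invented,
# loosely centered on the group means reported in Table 3).
com = rng.normal(0.96, 0.10, 23)   # comorbid, N=23
anx = rng.normal(0.76, 0.10, 20)   # anxious, N=20
nac = rng.normal(0.71, 0.10, 23)   # controls, N=23

# One-way between-groups F test on this single condition.
f_stat, p_val = f_oneway(com, anx, nac)
print(round(f_stat, 2), p_val < 0.05)
```

The paper's planned comparisons then probe specific group×valence interaction terms rather than omnibus effects like this one.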
Since our analytic approach was hypothesis driven, we tested the individual hypotheses by conducting planned comparisons as follow-ups of the original omnibus ANOVA. To test the negativity hypothesis, a follow-up planned comparison tested the group (2: COMs vs. NACs)×valence of emotion (2: non-negative vs. negative) interaction term. To test the emotion-specificity hypothesis, the planned follow-up comparison tested the group (2: COMs vs. NACs)×type of emotion (2: angry vs. sad) interaction term. To test the disorder-specificity hypothesis, we used two different types of analysis. The first involved another planned comparison, this time contrasting the COM and the ANX groups while using a measure of anxiety (the BAI) as a covariate. Specifically, our planned comparison contrasted group (2: COMs vs. ANXs)×valence of emotion (2: non-negative vs. negative). We chose the BAI measure as the covariate for three reasons. First, the BAI is a measure of clinically relevant anxiety, while the STAI-S is a measure of state anxiety, and thus might be less relevant for distress associated with anxiety disorder per se. Second, in our sample the correlation between the BDI and the BAI was 0.74, while the correlation between the BDI and the STAI-S was 0.89. Thus we reasoned that for our sample, the BAI is a ‘purer’, or more differentiated, measure of anxiety than is the STAI-S. Finally, the BAI is the measure of anxiety most commonly used in studies comparing clinically comorbid populations to ‘pure’ groups of depressed and anxious individuals (e.g. Dozois and Dobson, 2001).

The second method of testing the disorder-specificity hypothesis used the whole range of variability on both depression and anxiety. To this end, we conducted a mixed ANCOVA, with gender as a between-subject variable, emotion as a within-subject variable, and depression (as measured by the BDI) and anxiety (as measured by the BAI) as covariates.
For this analysis we expected a significant emotion×BDI interaction, and a non-significant emotion×BAI interaction. The unpredicted interactions with participants’ gender were pursued in so far as the relevant interaction term of the omnibus ANOVA was significant.

3.3. Analysis of learning phase data

Before proceeding to examine memory differences between our experimental groups, we investigated whether the groups indicated different learning patterns during the incidental learning phase on two measures: percentage of ‘meet’ responses and latency of the response.

3.3.1. Percentage data

Table 2 presents the means and standard deviations of the percentage of ‘meet’ and ‘don't meet’ responses for each group, gender, and emotion. We examined the percentage of ‘meet’ decisions using the above-mentioned 3×2×4 ANOVA. Results indicated a significant main effect of emotion, F(1, 58)=16.88, P<0.001. Follow-up Tukey tests indicated that all individuals were more interested in getting acquainted with individuals expressing non-negative emotions than with individuals expressing negative emotions (33, 50, 19, and 15% ‘meet’ responses for neutral, happy, angry, and sad expressions, respectively). A significant group×gender×emotion effect was identified, F(6, 118)=2.33, P=0.034. Post hoc comparisons indicated that with respect to non-negative emotions (neutral and happy), non-distressed females were more willing to meet other individuals than were men; however, this difference was eliminated in depressed individuals, and reversed in anxious individuals. No differential patterns were found with respect to sadness. With respect to anger, it seems that only non-distressed (i.e. control) men indicated some willingness to meet with these individuals, whereas all others indicated their desire to stay away. No other main effects or interactions were significant.

Table 2. Means and standard deviations of the percentage of ‘yes’ answers to emotional faces in the learning stage, by gender and by group

                            Women                        Men                          All
Expression            COM      ANX      NAC        COM      ANX      NAC        COM      ANX      NAC
% ‘yes’ to neutral    26 (31)  22 (25)  43 (27)    21 (24)  50 (29)  29 (16)    25 (29)  30 (29)  43 (26)
% ‘yes’ to happy      39 (37)  61 (37)  56 (38)    38 (36)  61 (37)  32 (32)    39 (37)  61 (37)  58 (39)
% ‘yes’ to sad        17 (32)  11 (27)  28 (34)    00 (00)  12 (30)  19 (24)    12 (28)  12 (27)  25 (31)
% ‘yes’ to angry      12 (29)  14 (32)  20 (34)    21 (24)  08 (20)  39 (34)    15 (27)  12 (28)  26 (34)

3.3.2. Decision latency data

Incorrect decision times (1.5% of decision times that were less than 400 ms or more than 4000 ms) were eliminated from the analyses. We examined whether the groups differed in the times they examined angry, sad, happy, or neutral facial expressions using the 3×2×4 ANOVA. This analysis revealed a main effect of gender, such that women were faster to make their decisions than were men, 1253 vs. 1406 ms, F(1, 53)=4.74, P=0.031. No other main effects or interactions were significant (all Fs<1.5, Ps>0.20). Thus, there were no differences in the times that the three experimental groups examined the to-be-remembered stimuli.

3.4. Analysis of recognition data

Percentage of correct identification of old and new facial expressions (i.e. mean percentage of true positives and true negatives), percentage of false positives (percentage of decisions involving mis-recognition of new faces as old), percentage of false negatives (i.e. percentage of decisions involving mis-recognition of old faces as new), and the signal detection parameters D′ and C were calculated for each individual and for each emotion. For the present study, signal detection estimates were computed according to the methods presented by MacMillan and Creelman (1990). The sensitivity of response accuracy index D′ was computed based on the combination of hit and false alarm rates.
The response bias C was used as an indicator of the degree to which participants made false-positive vs. false-negative decisions. The means and standard deviations of these indices are presented in Table 3.

Table 3. Means and standard deviations of hit rate, false alarms and parameters of signal detection (D′ and C) as a function of emotion for comorbid, anxious, and control participants

                 COM            ANX            NAC
Hit rate
  Neutral        0.77a (0.21)   0.82a (0.17)   0.78a (0.24)
  Happy          0.62a (0.38)   0.85b (0.26)   0.88b (0.19)
  Angry          0.96a (0.13)   0.76b (0.31)   0.71b (0.30)
  Sad            0.76a (0.27)   0.69ab (0.31)  0.51b (0.35)
False alarms
  Neutral        0.42a (0.27)   0.47a (0.29)   0.49a (0.34)
  Happy          0.39a (0.38)   0.33a (0.36)   0.34a (0.40)
  Angry          0.34a (0.38)   0.33a (0.41)   0.24a (0.31)
  Sad            0.53a (0.36)   0.33ab (0.36)  0.28b (0.36)
D′
  Neutral        1.18a (1.28)   1.64a (1.05)   1.19a (1.09)
  Happy          1.39a (1.27)   2.94b (1.58)   2.52b (1.70)
  Angry          3.55a (1.93)   2.20b (1.90)   2.20b (1.52)
  Sad            1.45a (1.62)   1.60a (1.64)   1.10a (1.55)
C
  Neutral        0.34a (0.89)   0.59a (1.12)   0.91a (1.15)
  Happy          0.56a (1.64)   0.81a (1.26)   1.10a (1.31)
  Angry          1.20a (1.02)   0.56ab (1.26)  0.43b (1.19)
  Sad            0.97a (1.32)   0.34ab (1.52)  0.07b (1.41)

Means with different subscripts are significantly different at the P<0.05 level, when controlling for the gender variable.

3.5. Correct recognition

To examine the memory patterns for emotional expressions, we calculated the percentage of correct recognitions (hit rate) for angry, sad, happy, and neutral facial expressions. A 3 (group)×2 (gender)×4 (emotion) ANOVA was conducted on the hit-rate measure. A main effect of emotion was identified, F(3, 58)=4.01, P=0.012. Post hoc Tukey tests indicated that the hit rate for sad expressions (64%) was lower than those for the neutral, happy, and angry expressions (75, 79, and 82%, respectively). A gender×emotion interaction was significant, F(3, 57)=2.68, P=0.050. To clarify the nature of this interaction, separate sets of analyses were performed for men and women.
While men had more difficulty correctly recognizing sad vs. all other expressions (57, 87, 87, and 75% for sad, angry, happy, and neutral expressions, respectively; F(3, 16)=3.29, P=0.049), women did not exhibit such difficulties (72, 77, 72, and 81%, respectively; F(3, 43)=1.93, P=0.13).

To examine the negativity hypothesis, we conducted a planned comparison in the COM and NAC groups. A significant group×valence (non-negative vs. negative) interaction was identified, F(1, 43)=17.46, P<0.001. Planned t-test comparisons indicated that while COMs had a greater hit rate for negative (angry and sad) than for non-negative (happy and neutral) expressions, NACs exhibited the opposite pattern. Thus, our negativity hypothesis was supported by the hit-rate data.

To examine the disorder-specificity hypothesis, we conducted two types of analyses, first using a 2×2×2 ANCOVA, and then using a 2×2 ANCOVA. Results of the first strategy indicated a group (COM vs. ANX)×valence interaction, F(1, 38)=10.52, P<0.001. Planned t-test comparisons indicated that, while COMs had a higher hit rate for the negative vs. positive expressions, ANXs did not exhibit this pattern. Results using the second strategy indicated a significant emotion×BDI interaction, F(3, 60)=4.02, P<0.01. In contrast, the emotion×BAI interaction was not significant (F<1). Table 4 presents the correlations between measures of memory and measures of depression (BDI) and of anxiety (BAI). As revealed by these data, depression was associated with a strong negative correlation with hit rate for happy emotional expressions, and strong positive correlations with hit rates for angry and sad emotional expressions. No such association was found with the BAI when the BDI was statistically controlled. Thus, results for the hit-rate measure supported the disorder-specificity hypothesis.

Table 4. Correlations between memory parameters (hit rate, false alarm, memory strength D′, and report bias C) for each emotion and measures of distress (BDI; BAI; BDI controlling for BAI; BAI controlling for BDI)

Zero-order           Correlations with BDI                Correlations with BAI
correlations     Neutral   Happy     Angry    Sad       Neutral   Happy     Angry   Sad
Hit rate         0.02      −0.39**   0.34**   0.34**    −0.04     −0.29**   0.27**  0.25*
False alarm      −0.03     0.03      0.39**   0.15      −0.001    0.01      0.24*   0.24*
D′               −0.09     −0.31**   0.25*    0.03      −0.02     −0.29**   0.14    0.15
C                −0.07     −0.11     0.29**   0.41**    −0.14     −0.14     0.23    0.28*

Partial              Correlations with BDI,               Correlations with BAI,
correlations         controlling for BAI                  controlling for BDI
Hit rate         0.09      −0.27*    0.24*    0.21      −0.09     0.14      0.00    0.01
False alarm      −0.04     0.04      −0.05    0.32**    −0.03     −0.03     0.20    −0.08
D′               0.14      −0.15     0.22     0.17      −0.09     −0.09     −0.07   0.21
C                −0.02     −0.01     0.19     0.32**    −0.07     −0.10     0.03    −0.05

* Significant at the P<0.05 level. ** Significant at the P<0.01 level.

3.6. False alarms

We calculated false alarm scores for angry, sad, and happy facial expressions. A 3 (group)×2 (gender)×4 (emotion) ANOVA was conducted on this measure. A main effect of emotion was identified, F(3, 60)=4.66, P<0.01. Planned comparisons indicated that there were more false alarms for neutral than for emotional expressions. No other main effects or interactions were identified (all Fs<1.2, n.s.).

3.7. Analysis of signal detection data

3.7.1. Discriminability index (D′)

The hypotheses about negative emotional expressions were also examined using the discriminability measure D′. In the present context, D′ distinguishes between old and new stimuli, and is thus a measure of retention. We calculated mean D′ scores for each emotional expression for each individual. These D′ scores were submitted to a 3 (group)×2 (gender)×4 (emotion) ANOVA. Results revealed a main effect of emotion, F(2, 69)=8.24, P<0.05. Tukey comparisons indicated that the discriminability of happy and angry expressions was higher than that of sad expressions.
Results also revealed an emotion×gender interaction, F(3, 57)=2.84, P=0.04. To clarify the nature of the interaction, separate sets of analyses were performed for men and for women. While men had more difficulty discriminating sad, as compared to angry or happy but not neutral, expressions (D′ values were 0.97, 3.01, 2.26, and 1.26 for sad, angry, happy, and neutral expressions, respectively; F(3, 16)=10.18, P<0.001), women recognized angry and sad expressions equally well, and better than non-negative expressions (D′ values were 1.78, 2.28, 1.98, and 1.38 for sad, angry, happy, and neutral expressions, respectively; F(3, 43)=4.61, P=0.007).

To examine the negativity hypothesis, we conducted a planned comparison with the COM and NAC groups. A significant group×valence interaction was identified, F(1, 43)=7.46, P<0.01. Planned comparisons indicated that while COMs remembered more negative than non-negative expressions, NACs exhibited the opposite pattern. Thus, our negativity hypothesis was supported by the D′ data as well. To examine the disorder-specificity hypothesis using the first strategy, we examined the group×valence interaction term of the appropriate ANCOVA. A significant interaction was identified, F(1, 38)=6.16, P<0.01. Planned comparisons indicated that COMs remembered more negative than non-negative expressions, whereas ANXs did not exhibit this pattern. The examination of this hypothesis using the second strategy yielded a significant emotion×BDI interaction, F(3, 60)=3.24, P<0.05. The emotion×BAI interaction did not reach significance, F(3, 60)=1.97, n.s. As can be seen from Table 4, depression was associated with a strong negative correlation with memory strength for happy emotional expressions, and strong positive correlations with memory strength for angry and sad emotional expressions. No such pattern was observed for the BAI after the BDI was statistically controlled for.

3.7.2. Response bias (C)

To examine whether COMs differ from NACs in their decision criteria concerning recognition of emotional expressions, we examined the relative propensity to make false-positive vs. false-negative decisions (C). In this context, a higher C value corresponds to a relatively higher propensity for identifying old faces as new (false negatives) as compared to the propensity for identifying new faces as old (false positives). We calculated C scores for each emotional expression and each individual. These scores were submitted to a 3 (group)×2 (gender)×4 (emotion) ANOVA. Results revealed a significant group×emotion interaction, F(3, 57)=3.66, P<0.001. No other main effect or interaction reached significance. To examine the negativity hypothesis, a planned comparison involving the COM and NAC groups was performed. A significant group×valence interaction was identified, F(1, 43)=16.88, P<0.001. Planned comparisons indicated that while COMs were more conservative when making judgments for negative (sad and angry) expressions than for non-negative (i.e. happy and neutral) expressions, NACs exhibited the opposite bias. To examine the disorder-specificity hypothesis, we again conducted two types of analyses. First, we contrasted the COM and the ANX groups while using a measure of anxiety (the BAI) as a covariate. A significant emotion×group interaction was identified, F(1, 38)=8.12, P<0.001. Examination of the disorder-specificity hypothesis using the second strategy revealed a significant emotion×BDI interaction, F(3, 59)=3.59, P<0.05. The emotion×BAI interaction was not statistically significant, F<1.

3.8. Analysis of decision latency data

Decision latencies of correctly identifying images as new (not appearing in phase I) and as old (appearing in phase I) were computed separately for each individual and each emotion. Incorrect decision times (1.5% of decision times that were less than 400 ms or more than 4000 ms) were eliminated from the analyses.
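The latency-trimming rule just described (discarding decision times under 400 ms or over 4000 ms) can be sketched as follows; the array values are invented for illustration:

```python
import numpy as np

def trim_latencies(rt_ms, lo=400, hi=4000):
    """Drop decision times outside [lo, hi] ms; return kept values and % dropped."""
    rt_ms = np.asarray(rt_ms, dtype=float)
    keep = (rt_ms >= lo) & (rt_ms <= hi)
    pct_dropped = 100.0 * (~keep).mean()
    return rt_ms[keep], pct_dropped

# Hypothetical decision times (ms) from one participant.
rts = [350, 820, 1430, 1602, 980, 5200, 1747, 2600]
kept, pct = trim_latencies(rts)
print(len(kept), pct)  # → 6 25.0
```

In the study, this rule removed about 1.5% of all decision times before the latency analyses.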
Previous research has found that the recognition of previously seen images is an easier task than the recognition of new images (e.g. Bower and Karlin, 1974). To examine whether the present data would yield the same pattern, mean decision latencies of ‘old’ and ‘new’ images were computed for each individual. These scores were analyzed using a 2 (presentation: old vs. new)×4 (emotion)×3 (group) ANOVA. Results indicated a significant effect of presentation, such that ‘old’ stimuli were identified faster than ‘new’ stimuli (1602 vs. 1747 ms, F(1, 63)=5.56, P<0.01). In addition, a significant main effect of emotion was found, such that decisions regarding neutral and happy expressions took longer than decisions regarding angry and sad expressions (decision latencies were 1739, 1702, 1660, and 1596 ms for neutral, happy, sad, and angry expressions, respectively; F(3, 58)=4.44, P<0.01). No other main effects or interactions reached significance (all Fs<1.7, n.s.).
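The signal-detection indices D′ and C reported above can be computed from hit and false-alarm rates with the standard Gaussian formulas (d′ = z(H) − z(F); c = −(z(H) + z(F))/2). The paper does not state its correction for extreme rates or its sign convention for C, so this sketch uses common defaults (the clamp and the assumed item count n are my assumptions), and its values need not reproduce Table 3:

```python
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate, n=24, correct=True):
    """d-prime and criterion c from hit and false-alarm rates.

    n is an assumed per-category item count used only to clamp rates of
    exactly 0 or 1, keeping the z-transform finite.
    """
    if correct:
        hit_rate = min(max(hit_rate, 0.5 / n), 1 - 0.5 / n)
        fa_rate = min(max(fa_rate, 0.5 / n), 1 - 0.5 / n)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -(z_h + z_f) / 2

# COM group, angry faces (hit and false-alarm rates taken from Table 3).
d_prime, c = sdt_indices(0.96, 0.34)
print(round(d_prime, 2), round(c, 2))  # → 2.16 -0.67
```

With this convention a negative c indicates a liberal bias (saying ‘old’ often); the paper's C appears to follow a different convention or correction, as its tabled values differ.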