Compensatory strategies in processing facial emotion: Evidence from prosopagnosia
|Article code||Publication year||English article||Persian translation||Word count|
|37884||2006||9-page PDF||Available on order||Not calculated|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Neuropsychologia, Volume 44, Issue 8, 2006, Pages 1361–1369
Abstract
We report data on the processing of facial emotion in a prosopagnosic patient (H.J.A.). H.J.A. was relatively accurate at discriminating happy from angry upright faces, but he performed at chance when the faces were inverted. Furthermore, with upright faces there was no configural interference effect on emotion judgements when face parts expressing different emotions were aligned to express a new emergent emotion. We propose that H.J.A.'s emotion judgements relied on local rather than on configural information, and that this local information was disrupted by inversion. A compensatory strategy, based on processing local face parts, can be sufficient to process at least some facial emotions.
Conclusion
The prosopagnosic patient H.J.A. was able to discriminate between happy and angry emotions in static, full faces when they were upright, though performance was impaired relative to controls. He also showed an inversion effect, with performance falling to chance when faces were inverted. With facial composites, H.J.A.'s emotion judgements were again less accurate, though he still performed above chance when judging the emotions expressed in the bottom halves of faces. However, his emotion judgements were not influenced by whether the halves were presented within a composite or noncomposite face. The control participants were also affected by inversion with full faces, but, unlike H.J.A., they were additionally disrupted when face halves were part of a composite relative to a noncomposite image. This disruptive effect of the composite was eliminated when faces were inverted, linking it to configural cues emerging from composite, upright faces.

These data indicate that there can be a residual ability to judge facial emotions at an above-chance level, even in a patient with severe prosopagnosia who is apparently unable to access any stored knowledge based on the structural identity of faces (e.g., in familiarity judgements, or in tests sensitive to implicit knowledge about faces; e.g., Lander et al., 2004). However, this does not mean that facial information about emotions is processed normally in such a case. Our study indicates that H.J.A.'s processing of facial emotion differed qualitatively from that of control participants. Control participants appeared sensitive to configural information present in whole, upright faces (see also Calder et al., 2000). H.J.A. did not, since he was unaffected by our manipulation contrasting composite with noncomposite faces. Instead, we suggest that H.J.A. based his emotion judgements on the presence of critical local features.
Particular weight was placed on features in the lower half of the face, but there was also some contribution from features in the upper region when they matched features in the lower region. This may represent a residual, feature-based process that is present when normal participants make judgements about facial emotion, a process that is revealed when the extraction of configural information is disrupted by a brain lesion. Alternatively, it may represent a compensatory strategy developed by H.J.A., perhaps even linked to his spared ability to use facial motion to make emotion judgements (Humphreys et al., 1993). For example, movements of the mouth may be particularly salient when people change emotional expression, leading H.J.A. to weight that region strongly even when asked to make emotion judgements about static images. Whatever the case, the important point is that we should be cautious about inferring functionally separate processes for extracting facial emotion and identity from a case such as H.J.A.'s, where identity judgements are at floor but emotion judgements are above chance. This does not mean that emotion judgements operate in a normal manner. Given that H.J.A. showed no sign of using configural information (Experiment 2; see also Boutsen & Humphreys, 2002; Humphreys & Riddoch, 1987; Young et al., 1994), it is of interest that he was strongly affected by face inversion in Experiment 1. This in turn suggests that inversion effects are not solely due to the loss of configural cues; they can also come about because the processing of local facial features is sensitive to their familiar orientation. The degree to which a feature-based strategy can play a role in emotion judgement probably also depends on the choice of emotions being tested. Here we examined the contrast between angry and happy faces, and feature-based cues may be a relatively reliable means of distinguishing these two emotions.
As finer distinctions are required, we may expect that emergent, configural cues will play a more important part. This requires empirical testing. Another point is that H.J.A. mainly relied on the bottom half of the face to recognise facial emotion. In a recent study, Caldara et al. (2005) reported a similar observation with another prosopagnosic patient engaged in face recognition. Moreover, Bukach, Bub, Gauthier, and Tarr (2006) reported that it is possible to observe a 'local expertise effect' in prosopagnosia, suggesting configural processing, but over a local region. The question raised by such observations is whether H.J.A. did process local configural information from the mouth region, which would explain why there is an inversion effect. This hypothesis needs further investigation. Nevertheless, even if this assumption is verified, it remains the case that he used an abnormal strategy to perform the task. The data reported here emphasise the importance of showing that face processing is qualitatively similar in patients and controls before judgements are made about whether dissociations reflect a difference between the computational uses to which common information is put (e.g., for accessing facial identity relative to facial emotion). In the present case, we suggest that there is a difference in the way facial features can be used to make contrasting judgements, but not necessarily a difference between processing facial identity and emotion. A failure to demonstrate qualitative similarities between a residual ability in a patient and the normal process in controls means that it is possible to challenge the view that two distinct and/or independent regions sustain identity and emotion processing (e.g., Baudouin, Martin, Tiberghien, Verlut, & Franck, 2002; Ganel & Goshen-Gottstein, 2004; Martin, Baudouin, Tiberghien, & Franck, 2005; Schweinberger, Burton, & Kelly, 1999; Tiberghien, Baudouin, Guillaume, & Montoute, 2003).