Download English ISI Article No. 37918
Article Title

Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression
Article Code: 37918
Publication Year: 2015
Pages: 15 pages PDF
Source

Publisher : Elsevier - Science Direct

Journal : Cortex, Volume 65, April 2015, Pages 50–64

Keywords
Prosopagnosia; Facial expressions of emotion; Reverse correlation
Article Preview

Abstract

The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus – pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or on dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment in categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS.

Altogether, our data provide critical insights into the healthy and impaired face systems, question evidence of deficits obtained from patients using static images of facial expressions, and offer novel routes for patient rehabilitation.

Introduction

The human face transmits a wealth of visual signals relevant for the identification and the categorization of facial expressions of emotion. The brain, as a decoder, flexibly filters the incoming visual information transmitted by the face to rapidly achieve complex perceptual categorizations (Schyns, Petro, & Smith, 2009). For example, the uniqueness of facial features characterizing a given individual, and their overall organization in the face, constitute the core information for identification and also for dissociating familiar from unfamiliar faces. Other signals can also be extracted from faces, such as the cues disclosing age (e.g., George & Hole, 1995), gender (e.g., Brown and Perrett, 1993, Ekman and Friesen, 1976, Ekman and Friesen, 1978, Schyns et al., 2002 and Tranel et al., 1988), race (e.g., Caldara and Abdi, 2006, Caldara et al., 2004, Vizioli et al., 2010a and Vizioli et al., 2010b) and emotional state (e.g., Bruce and Young, 1986, Calder and Young, 2005, Ekman and Friesen, 1976, Ekman and Friesen, 1978 and Smith et al., 2005). Overt emotional states can also be extracted from face signals; they are mostly conveyed by facial expressions of emotion. The basic signals (i.e., “happy,” “surprise,” “fear,” “disgust,” “anger,” and “sad”) are only weakly correlated with each other to minimize confusions for their decoding (Smith et al., 2005), and we recently reported cross-cultural tunings in the way the emotion signals are transmitted and decoded (Jack et al., 2009, Jack et al., 2012a and Jack et al., 2012b).

Yet, a fundamental question remains unresolved: does the face information used to recover identity and emotional expressions tap into common or distinct representational systems? According to influential cognitive (Bruce & Young, 1986) and neuroanatomical (Haxby, Hoffman, & Gobbini, 2000) models of face processing, two distinct functional and neural systems accomplish the recognition of facial identity and facial expression.
The first system – performing facial identification (Haxby et al., 2000) – is proposed to mainly involve the inferior occipital gyri and lateral fusiform gyrus, whereas the second system – performing facial expression categorization – is proposed to involve the inferior occipital gyri, the posterior superior temporal sulcus (pSTS) and the amygdala (for a review see, Calder and Young, 2005 and Pessoa, 2008). However, some authors have questioned the idea of independence between those systems, relying mainly on results from computational modelling and neuroimaging evidence (Calder, 2011 and Calder and Young, 2005). A single model based on a Principal Component Analysis (PCA) can achieve independent coding of facial identity and facial expression, suggesting the possible existence of a multidimensional system, with a partial rather than absolute independence (Calder, Burton, Miller, Young, & Akamatsu, 2001). These simulations have thus challenged the view of independence between the coding for identity and expression, at least suggesting that those models are less strongly supported than is often assumed (Calder & Young, 2005). In line with this position, Palermo, O'Connor, Davis, Irons, and McKone (2013) have recently put forward the theory of a common first step in the processing of expression and identity, with a split occurring at a later stage; a view that is in agreement with the functional involvement of the inferior occipital gyrus as the entry level for both tasks (Calder and Young, 2005, Haxby et al., 2000 and Pitcher, 2014).
However, even though a neural dissociation for the processing of identity and emotional expression is supported by electrophysiological studies in primates (e.g., Hasselmo, Rolls, & Baylis, 1989), functional neuroimaging in humans (e.g., Winston, Henson, Fine-Goulden, & Dolan, 2004) and brain-damaged patients (Haxby et al., 2000), recent evidence suggests that the neural computations occurring in the inferior occipital gyrus and the right pSTS are functionally distinct and have a causal involvement in processing facial expressions (Pitcher, Duchaine, & Walsh, 2014). To sum up, more evidence is necessary to clarify this debate and, as acknowledged by Calder and Young (2005), further studies with brain-damaged patients are necessary to probe the hypothesis of distinct visuoperceptual systems for facial identity and facial expression categorization. Following brain lesions, some patients lose the ability to recognize facial identity, despite no other obvious impairments of the visual system and a preserved identification via other modalities (e.g., voice, gait and so forth). The specificity of this face recognition deficit is striking and rare, and has elicited considerable attention within the neuropsychological literature since the first clinical observations (Quaglino, 1867 and Wigan, 1844) and the introduction of the term prosopagnosia by Bodamer (1947). Acquired prosopagnosia typically follows brain damage to bilateral occipitotemporal areas (e.g., Damasio et al., 1982, Farah, 1990, Landis et al., 1988 and Sergent and Signoret, 1992). Anatomical descriptions of prosopagnosia endorse the necessary and sufficient role of the right hemisphere (Landis et al., 1988 and Sergent and Signoret, 1992) in the occipitotemporal pathway of face processing (for a review see, Bouvier & Engel, 2004).
The clinical and anatomical conditions of prosopagnosia have always received great interest in cognitive neuroscience, as they clarify the neurofunctional mechanisms of normal face processing. The different sub-functions of the cognitive architecture of face processing have been isolated through distinct double dissociations in brain-damaged patients, for instance a functional segregation between the ability to recognize unfamiliar and familiar faces (e.g., Malone, Morris, Kay, & Levin, 1982) and between lip reading and face identification (Campbell, Landis, & Regard, 1986). Yet, the neuropsychological literature remains controversial regarding prosopagnosic patients with a spared ability to identify facial expressions despite their impairment in recognizing facial identity, and regarding patients showing impaired facial expression recognition with preserved facial identity recognition (for a detailed review see, Calder, 2011). Some acquired prosopagnosic patients showed a marked impairment in the categorization of facial expressions (Bowers et al., 1985, De Gelder et al., 2000, De Renzi and Di Pellegrino, 1998 and Humphreys et al., 1993). Other studies reported preserved recognition of emotion in acquired prosopagnosia (Bruyer et al., 1983, Cole and Perez-Cruet, 1964, Mattson et al., 2000, Sergent and Villemure, 1989, Shuttleworth et al., 1982, Tranel et al., 1988 and Young et al., 1993). In addition, as pointed out by Calder and Young (2005) and Calder (2011), the decoding of face identity, as well as of facial expressions of emotion, activates a similar network of regions in the occipitotemporal cortex. Facial expression impairments in patients are often correlated with a deficit in decoding emotions from other modalities, which suggests a general, multimodal deficit in those patients, rather than a selective impairment of facial expression representations.
In addition, a better understanding of the patients' information use (i.e., representations) for both tasks is necessary to clearly understand the very nature of the deficits in the face processing system (Calder and Young, 2005 and Calder, 2011). Consequently, the question of a dissociation between the identity and expression systems in acquired cases of prosopagnosia remains unclear. To address this issue, we tested PS – a pure case of acquired prosopagnosia. PS is a 64-year-old woman (born in 1950) who sustained a closed-head injury in 1992. PS shows normal object recognition (e.g., Busigny et al., 2010 and Rossion et al., 2003) and relies on atypical cues to determine the identity of a person, such as voice, clothes, or other salient non-face features (e.g., glasses, haircut, beard, posture). She has major lesions of the left mid-ventral and the right inferior occipital cortex. Minor lesions of the left posterior cerebellum and the right middle temporal gyrus were also detected (for a complete anatomical description see, Rossion, 2008 and Sorger et al., 2007), whereas the regions that are assumed to be critical for the decoding of emotional expressions (i.e., the amygdala, the insula and the pSTS) are anatomically spared. Note that even though the occipitotemporal regions do not play a central role in facial expression decoding, the right inferior occipital gyrus is damaged in PS and represents the entry level for both expression and identity in posited neuroanatomical models (Haxby et al., 2000 and Pitcher, 2014). Thus, it remains to be clarified whether these lesions also have an impact on the patient's processing of facial expressions. Of interest, we previously used a response classification technique – Bubbles – to reveal the diagnostic information used by PS for face identification (Caldara et al., 2005).
Bubbles is a response classification technique sampling the information in a 3-D space (2D image × spatial frequencies) (Gosselin & Schyns, 2001) to present sparse versions of the faces as stimuli. Observers categorize the sparse stimuli, and Bubbles keeps track of the samples of information that lead to correct and incorrect identification responses. From this information, we can establish how each region of the input space contributed to face identification performance and depict the diagnostic information used to effectively decode the stimulus. In contrast to healthy observers, PS did not use information from the eye region to identify familiar faces, but instead the lower part of the face, including the mouth and the external contours. To sum up, PS's well-established bias to use information from the mouth to identify faces and her anatomical neural dissociation provide a unique opportunity to probe the existence of a dichotomy in the representations used for facial identity and expression categorization. Here, we first assessed her categorization performance for the six facial expressions of emotion using the classical Ekman and Friesen (1976) FACS (Facial Action Coding System) static face database. The FACS provides an anatomical taxonomy of the human muscles activated during the transmission of facial expressions of emotion (Ekman & Friesen, 1978), by quantifying facial movements for every expression in terms of so-called Action Units (AUs – each of them relating to a particular muscle). We then modelled PS's 3D dynamic mental representations of the six classic facial expressions by using a dynamic FACS-based Generative Face Grammar (GFG, see Fig. 1, the methods section and Yu, Garrod, & Schyns, 2012) on the AUs combined with a reverse correlation technique (see the methods and also Jack, Caldara, et al., 2012).
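The core logic of a response classification analysis like Bubbles can be illustrated with a toy simulation. The following sketch is illustrative only: the grid size, the aperture parameters, and the simulated observer (who responds correctly more often when a hypothetical diagnostic region is revealed) are assumptions for demonstration, not the paper's actual stimuli or procedure. Averaging the random aperture masks from correct versus incorrect trials recovers the region the observer relied on:

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32          # toy "face" image grid
N_TRIALS = 4000     # number of sparse-stimulus trials
N_BUBBLES = 10      # Gaussian apertures per trial
SIGMA = 2.0         # aperture width

# Hypothetical diagnostic region (e.g., the "eye" band of the toy face)
diag = np.zeros((H, W), dtype=bool)
diag[8:12, 6:26] = True

ys, xs = np.mgrid[0:H, 0:W]

def bubble_mask():
    """Sum of Gaussian apertures ('bubbles') at random locations, clipped to [0, 1]."""
    m = np.zeros((H, W))
    for _ in range(N_BUBBLES):
        cy, cx = rng.integers(0, H), rng.integers(0, W)
        m += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * SIGMA ** 2))
    return np.clip(m, 0.0, 1.0)

correct_sum = np.zeros((H, W)); n_correct = 0
wrong_sum = np.zeros((H, W)); n_wrong = 0

for _ in range(N_TRIALS):
    m = bubble_mask()
    # Simulated observer: accuracy rises with how much of the
    # diagnostic region the mask reveals (plus guessing noise).
    revealed = m[diag].mean()
    if rng.random() < 0.25 + 0.75 * revealed:
        correct_sum += m; n_correct += 1
    else:
        wrong_sum += m; n_wrong += 1

# Classification image: information revealed on correct minus incorrect trials.
# High values mark the regions that drove correct categorization.
ci = correct_sum / n_correct - wrong_sum / n_wrong
```

In this toy setup, `ci` peaks inside `diag`, mirroring how the actual technique localizes the face information (e.g., eyes vs mouth) that an observer uses for a given categorization task.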
The use of dynamic facial expressions provides a more ecologically valid approach to study the perception and processing of facial expressions, as our natural environment is filled with dynamic, temporal and multimodal information (Johnston, Mayes, Hughes, & Young, 2013). Pertinently, it has also recently been demonstrated that there is a causal involvement of the right pSTS in the processing of dynamic facial information (Pitcher et al., 2014), a region anatomically spared in PS.

Conclusions

The adequate categorization of facial expressions is a critical feature of adaptive social interactions. Our general goal was to understand whether the face information used for identity and emotional expression categorization taps into common or distinct representational systems. We isolated information use for facial expressions in a pure case of acquired prosopagnosia with a lesion encompassing the right inferior occipital gyrus. PS's reconstructed mental models showed a normal use of all of the face features and muscles (i.e., AUs of the FACS coding system) for the representation of facial expressions, with the exception of fear. This is in stark contrast with the suboptimal information she uses for retrieving face identity (i.e., the mouth and the external contours). These data suggest that the face system does not rely on a unique representational system to code face features for identity and expression, or at least that it relies on distinct cortical pathways to access them, flexibly adapting to visual and task constraints. In addition, our observations indicate that those cortical routes are modulated by the use of dynamic information, which facilitates the correct categorization of facial expressions in the patient. The inferior occipital gyrus plays a critical role in the decoding of static images, and the patient presents a selective impairment in the decoding of static expressions. By contrast, the patient shows normal performance in decoding facial expressions from dynamic faces. The pSTS, which is spared in the patient, would be sufficient to effectively achieve this task. This result reinforces the view that a cortical pathway carries face signals directly from the early visual cortex to the pSTS, thus providing novel insights into the normal face operating system.
Altogether, our data also question the conclusions obtained from patients using unnatural static images, and emphasize the need for a future neuroimaging study on the same patient to consolidate and provide a fine-grained picture of the present findings.