Superior voice recognition in a patient with acquired prosopagnosia and object agnosia
Publisher : Elsevier - Science Direct
Journal : Neuropsychologia, Volume 48, Issue 13, November 2010, Pages 3725–3732
Abstract

Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person-identity cues such as hair, gait, or the voice. Are they therefore superior at using non-face cues, specifically voices, for person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reversal of the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life.
1. Introduction

Everyday social interaction is typically a multisensory experience in which we see and speak with the people with whom we interact. Seeing a person's face and hearing their voice provide rich information about gender, age, or other idiosyncratic features that give rise to a unique person identity. How face and voice information are integrated in person identity recognition has been outlined in a model by Campanella and Belin (2007). They propose that face and voice integration occurs in two interactive streams that share a common functional organization. Face–voice integration requires analyzing different types of sensory information, such as face and voice structural information, affective information, and identity information. Interaction of face and voice information occurs through crosstalk between ‘unimodal’ face and voice modules, and integration of face and voice information for person identification may occur at a higher-order ‘supramodal’ person identity module. Consistent with this parallel and interactive model, behavioural studies exploring identity recognition have shown that previous exposure to both face and voice information during person encoding facilitates later identification of that individual when cues from only one sensory modality (face or voice) are available (Ellis et al., 1997, Schweinberger et al., 1997, Sheffert and Olson, 2004 and von Kriegstein et al., 2008). On a functional level, neuroimaging studies have shown crossmodal responses for familiar voices in a putatively face-selective cortical region, the fusiform face area (FFA), and these responses are coupled with activation in a voice-selective cortical region, the superior temporal sulcus (STS), when the task specifically requires recognizing familiar speakers (von Kriegstein and Giraud, 2006 and von Kriegstein et al., 2005).
While multisensory encoding of identity can facilitate unimodal recognition, some researchers have found interference effects with bimodal recognition. Joassin, Maurage, Bruyer, Crommelinck, & Campanella (2004) demonstrated that both reaction time and accuracy were compromised when participants identified previously learned face–voice pairings with bimodal (face and voice) stimuli. Bimodal stimulus presentations resulted in intermediate performance compared to the unimodal (visual-only and auditory-only) conditions. The authors suggested that because face recognition is superior to voice recognition, the presence of the voice in a bimodal presentation actually interferes with efficient processing of the face. In a later paper, Joassin, Maurage, & Campanella (2008) demonstrated that when face stimuli were degraded, and therefore less reliable relative to the voices, bimodal stimulus presentations led to an enhancement effect. It seems likely that higher-order multisensory person identity (face/voice) information is integrated in the same statistically optimal fashion as lower-level multisensory audiovisual stimuli (lights and tones), whereby the more reliable or salient information is weighted more heavily (e.g. Alais and Burr, 2004 and Shams et al., 2002). This would mean that the more reliable sensory information (face or voice) would have greater influence on higher-order person recognition processes. Given that we typically have access to multimodal information, if an individual has a selective deficit in one sensory modality, will the remaining sensory modalities compensate for this loss? One such case to consider is prosopagnosia, a neurological deficit that impairs an individual's ability to visually recognize the identity of a face (Damasio, Damasio, & Van Hoesen, 1982; see also Ellis & Florence, 1990 for a review of Bodamer, 1947).
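The statistically optimal weighting idea referenced above (as in Alais and Burr, 2004) is usually formalized as inverse-variance (maximum-likelihood) cue combination: each cue's weight is proportional to its reliability (1/variance), and the fused estimate is more precise than either cue alone. The following sketch is purely illustrative; the function name `combine_cues` and the cue values are hypothetical and not taken from the paper.

```python
def combine_cues(estimates, variances):
    """Inverse-variance (maximum-likelihood) cue combination.

    Each cue's weight is proportional to its precision (1/variance),
    so the more reliable cue dominates the fused estimate.
    Returns (fused estimate, fused variance, weights).
    """
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    weights = [p / total_precision for p in precisions]
    fused = sum(w * x for w, x in zip(weights, estimates))
    fused_variance = 1.0 / total_precision  # smaller than any single cue's variance
    return fused, fused_variance, weights

# Hypothetical example: a reliable face cue (variance 1.0) and a noisy
# voice cue (variance 4.0) giving conflicting identity estimates.
est, var, w = combine_cues([0.0, 1.0], [1.0, 4.0])
# The face cue gets weight 0.8, so the fused estimate (0.2) sits near it,
# and the fused variance (0.8) is below the best single cue's variance.
```

If the visual cue is degraded (its variance grows), the same formula shifts the weight toward the voice, which is the reversal this paper reports in a compromised visual system.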
Acquired prosopagnosia is the result of injury or disease, and lesions within the known face-selective network have been demonstrated, such as in the occipital face area (OFA) (Rossion et al., 2003, Steeves et al., 2006 and Steeves et al., 2009) or the FFA (Lê et al., 2002 and Rossion et al., 2003). Despite being unable to recognize a face, individuals with prosopagnosia may nonetheless retain the ability to categorize faces from non-faces or objects (Steeves et al., 2006), and their visual scan paths for face images suggest that they may retain implicit face processing (Lê, Raufaste, Roussel, Puel, & Démonet, 2003). Given the inability to use face information for identity recognition, how do individuals with prosopagnosia identify people in daily life? People can be recognized by a variety of means, such as their hair or clothing, but of course these aspects of a person's identity are unreliable and may change on a daily basis. A potentially more reliable strategy for person identity recognition is to use voices. The use of voices for recognition has been described anecdotally in patients with acquired prosopagnosia since the mid-1950s (Damasio et al., 1990, De Renzi, 1997 and Pallis, 1955). In the present paper, we examine the use of auditory information for recognition processes in a patient with acquired prosopagnosia and visual object agnosia. Specifically, we ask how well voice information is used for person identification and sound information for object recognition. We outline two possible outcomes. One: acquired prosopagnosia may result in enhanced voice recognition ability. This would be consistent with a model of crossmodal (audiovisual) adaptation and/or compensatory strategies for the loss of proper use of one sensory modality. A loss of face recognition does not necessarily mean that other recognition systems will also be impaired.
For instance, patients with brain lesions have been described who have impairments specific to one domain without impact on others. Specifically, some patients have been described with impairments in the recognition of faces or voices, but not both (Neuner & Schweinberger, 2000). Consequently, since face and voice impairments can be independent, and given that healthy individuals typically weight sensory information from the more reliable, dominant sensory modality more heavily, it is not unreasonable to expect a patient with prosopagnosia to excel at using auditory information for person recognition. Similarly, acquired object agnosia may result in enhanced object sound recognition if visual and auditory streams for object recognition can operate independently and, as a result of the visual object agnosia, the auditory information is more reliable for the recognition of objects. Two: acquired prosopagnosia may result in impaired voice recognition ability. In a previous study of speaker recognition, von Kriegstein, Kleinschmidt, and Giraud (2006) reported the case of an individual with developmental prosopagnosia who showed poorer recognition of familiar speakers’ voices compared to neurologically intact controls. They further showed that individuals with prosopagnosia did not benefit from the presentation of dynamic face and voice pairs when learning person identities, compared to the presentation of voices with an image representing the speakers’ occupation (von Kriegstein et al., 2008). The controls, however, did show improvements in speaker recognition after having learned voices and faces together, which suggests that speaker recognition benefits from the ability to process faces. Their data support a bottom-up approach to person identity recognition whereby the coupling of the lower-level face and voice modules influences person identity recognition.
In a similar manner, acquired object agnosia may result in impaired object sound recognition if visual and auditory sound encoding must combine in a bottom-up fashion to generate object recognition and the visual object recognition stream is impaired. In Experiment 1, we quantify person recognition for faces, voices, and face–voice pair combinations using a classic old/new paradigm in a patient with acquired prosopagnosia compared to controls. In Experiment 2, we evaluate the identification of another stimulus class: object recognition for cars, car horns, and car–car horn pair combinations. Since patient SB shows implicit face processing through his scan paths for faces, the bottom-up model of face and voice encoding would predict that his voice recognition would also be impaired. If, however, patient SB's implicit face processing operates independently of voice processing, the crossmodal compensation model would predict that voice recognition will be enhanced relative to controls. Similarly for objects, on one hand, a bottom-up model of object and object sound encoding would predict that object sound recognition will be impaired relative to controls. On the other hand, the crossmodal compensation model would predict that object sound recognition will be enhanced relative to controls.
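Performance in a classic old/new recognition paradigm like Experiment 1 is commonly summarized with the signal-detection sensitivity measure d′, computed from hit and false-alarm rates. The sketch below is a generic illustration of that standard computation, with hypothetical trial counts; it is not the paper's actual analysis, which is not specified in this excerpt.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' sensitivity for an old/new recognition test.

    A log-linear correction (add 0.5 to each cell) keeps the
    z-scores finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for 20 'old' and 20 'new' trials:
above_chance = dprime(18, 2, 3, 17)   # many hits, few false alarms
at_chance = dprime(10, 10, 10, 10)    # hit rate equals false-alarm rate -> d' = 0
```

A patient at chance on faces but well above chance on voices would, in this framework, show a near-zero visual d′ alongside a high auditory d′, which is the dissociation pattern the two experiments are designed to detect.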