Hemispheric asymmetry in auditory distraction
|Article ID||Publication year||English article||Persian translation||Word count|
|38745||2010||9-page PDF||Available on order||8,309 words|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Brain and Cognition, Volume 74, Issue 2, November 2010, Pages 79–87
Abstract

Serial-verbal short-term memory is impaired by irrelevant sound, particularly when the sound changes acoustically (the changing-state effect). In contrast, short-term recall of semantic information is impaired only by the semanticity of irrelevant speech, particularly when it is semantically related to the target memory items (the between-sequence semantic similarity effect). Previous research indicates that the changing-state effect is larger when the sound is presented to the left ear in comparison to the right ear, the left ear disadvantage. In this paper, we report a novel finding whereby the between-sequence semantic similarity effect is larger when the irrelevant speech is presented to the right ear in comparison to the left ear, but this right ear disadvantage is found only when meaning is the basis of recall (Experiments 1 and 3), not when order is the basis of recall (Experiment 2). Our results complement previous research on hemispheric asymmetry effects in cross-modal auditory distraction by demonstrating a role for the left hemisphere in semantic auditory distraction.
1. Introduction

Evolution has shaped the brain’s hemispheres into two functionally specialised processing systems (Kinsbourne, 1970). One source of evidence for hemispheric specialisation comes from the demonstration of a number of deficits (and syndromes) associated with the language functions of brain-damaged patients (Searleman, 1977). Another source of evidence comes from the finding that the auditory system has stronger contralateral than ipsilateral pathways, which results in sound such as speech being processed predominantly by the hemisphere opposite to its presentation source. For example, input to the right ear has privileged access to the left hemisphere, which plays a dominant role in the processing of linguistic information, whereas input to the left ear has privileged access to the right hemisphere, which plays a more subservient role in linguistic processing and a more dominant role in non-linguistic processing (such as the processing of changes in complex auditory patterns; Shankweiler, 1966 and Tzourio et al., 1998). This is thought to result in the right ear advantage found in studies of linguistic sound processing and the left ear advantage found in studies of non-linguistic sound processing (Hugdahl et al., 2009, Poeppel et al., 2004 and Tervaniemi and Hugdahl, 2003). These ear advantages have been demonstrated for to-be-attended sound. In the present article, we explore hemispheric asymmetry in the processing of to-be-ignored sound in a visual–verbal task setting (i.e., cross-modal auditory distraction).

1.1. The changing-state effect and right hemisphere processing

Short-term verbal memory for the correct serial order of a set of sequentially presented visual items (visual–verbal serial recall) is markedly impaired by the mere presence of background sound that participants are explicitly instructed to ignore.
Two key signatures of this irrelevant sound effect are that the to-be-ignored sound must change acoustically from one sound element to the next (Jones & Macken, 1993) and that the focal task must require serial rehearsal (seriation) of the to-be-recalled (TBR) items (Beaman and Jones, 1997 and Hughes et al., 2007). If the participants are required to recall the items in serial order, changing-state sound sequences (e.g., “a b a b a b a”) are invariably more disruptive than steady-state sound sequences (e.g., “a a a a a a a”). This is called the changing-state effect. While the acoustic properties of the sound are endowed with disruptive power in the visual–verbal serial recall setting, the meaning of the sound is relatively impotent (Buchner et al., 1996, Jones and Macken, 1993 and Tremblay et al., 2000; but see Buchner, Rothermund, Wentura, & Mehl, 2004). These observations are in line with the view that the changing-state effect is a function of the similarity between two sets of order processes: the deliberate processing of the order of the TBR items and the involuntary processing of the order between successive and perceptually discrete sound events (for a review, see Macken, Tremblay, Alford, & Jones, 1999).

Evidence of a hemispheric bias in cross-modal auditory distraction in the context of serial recall comes from recent studies by Hadlington and colleagues (Hadlington et al., 2004 and Hadlington et al., 2006). They found that the changing-state effect is larger when the sound is presented to the left ear only, compared with when the sound is presented either to the right ear only or to both ears. This finding was coined the left ear disadvantage and was only manifest when the task required serial recall (Hadlington et al., 2006).
Since more efficient obligatory processing of change in a sound stream results in greater disruption of serial recall (Macken, Phelps, & Jones, 2009), the left ear disadvantage suggests that the right hemisphere plays a prominent role in processing acoustic features of irrelevant sound streams (see also Grimshaw et al., 2003, Poeppel et al., 2004 and Zatorre et al., 1994). In other words, the right hemisphere’s specialisation in processing the order between successive sound events turns into a disadvantage when sound is to-be-ignored and the task-goal requires order processing.

1.2. The between-sequence semantic similarity effect and left hemisphere processing

In contrast to the effects of sound on serial order processes (e.g., visual–verbal serial recall), the mere meaning of sound (i.e., when speech is used) can indeed contribute to disruption of tasks that require or encourage semantic processing for efficient performance (Marsh et al., 2008, Marsh et al., 2009, Oswald et al., 2000 and Sörqvist, 2010a). In particular, Marsh et al., 2008 and Marsh et al., 2009 have shown that when meaning is the basis of retrieval, rather than serial order, the semanticity of irrelevant speech is more disruptive than its acoustic properties. For example, to-be-ignored words produce more disruption than non-words or reversed words when the focal task requires semantic processing (for a review, see Marsh & Jones, in press). Marsh et al. have employed an experimental paradigm in which each experimental trial involves visually presented TBR exemplars that are members of the same semantic category (e.g., Fruit). During some trials, the participants are also presented with to-be-ignored spoken words that are either taken from the same semantic category as the TBR items (e.g., other Fruit) or from a different semantic category (e.g., Tools).
Three findings from this research are of particular interest here. First, recall is poorer in the semantically related condition, the between-sequence semantic similarity effect. Second, this effect arises only when participants are instructed to recall the TBR words in any order (free recall): the effect is not found when participants attempt to recall words according to their order of presentation (serial recall). Third, the participants tend to recall the semantically related irrelevant words by mistake, even though they are instructed to ignore items presented in the auditory modality. Semantic auditory distraction thus embodies (a) an effect of mere meaningfulness (words produce more disruption than non-words), (b) a between-sequence semantic similarity effect (words related to the TBR items produce more disruption than unrelated words), and (c) promotion of intrusions from non-target items by speech semantically related to TBR items. The findings seem to be accommodated most easily within an interference-by-process approach to auditory distraction, similar to that applied to the changing-state effect (Marsh et al., 2008 and Marsh et al., 2009). The interference-by-process view explains semantic auditory distraction in terms of (a) deliberate inhibition of non-target competitors activated by speech, which spreads to target items and thus impairs recall, and (b) breakdown of source-monitoring (i.e., a failure to keep track of the source of target and non-target items).

Beaman, Bridges, and Scott (2007) concluded in a recent review that auditory distraction and the right ear advantage in dichotic listening are based on different mechanisms.
If the two phenomena were mediated by the same mechanism, the right ear advantage observed in dichotic listening—due to stronger contralateral than ipsilateral connections and dominant linguistic processing in the left hemisphere—should be accompanied by a right ear disadvantage in auditory distraction (i.e., a greater magnitude of disruption from irrelevant sound when presented to the right ear). In contrast, the studies of Hadlington et al. (2004, 2006) compellingly show that the changing-state effect is larger when the sound is presented to the left ear. However, the right ear advantage in dichotic listening may suggest a right ear disadvantage in semantic auditory distraction, particularly because both concern linguistic processing. Moreover, semantic auditory distraction can be modified by attentional control processes (Beaman, 2004, Beaman et al., 2007, Sörqvist et al., 2010 and Sörqvist et al., 2010), similar to the right ear advantage in dichotic listening (Hugdahl et al., 2009), but the changing-state effect cannot (Beaman, 2004, Beaman et al., 2007, Elliott and Cowan, 2005 and Sörqvist, 2010b; see Sörqvist, in press, for a review). These similarities point towards the possibility that the left hemisphere’s advantage in linguistic processing turns into a disadvantage when semantic information conveyed by sound has to be deliberately ignored. A relatively large body of neuroscientific evidence lends credence to this hypothesis, demonstrating dominant semantic processing of speech sound in the left hemisphere (e.g., Beaman et al., 2007, Scott et al., 2009 and Zahn et al., 2000) and interhemispheric inhibition of speech presented to the left ear (Bloom and Hynd, 2005, Clark et al., 1993 and Westerhausen and Hugdahl, 2008). Hence, with left ear input, the speech sound’s capacity to interfere with the semantic processes in the left hemisphere should be attenuated.
On the other hand, with right ear input, the semantic analysis of the speech is conducted more readily and should thus increase disruption. We therefore expected to find a greater between-sequence semantic similarity effect when speech is presented to the right ear compared with the left ear.
Conclusion (English)
6. Conclusion

The right hemisphere’s specialisation in acoustic order processing turns into a disadvantage when the task requires order processing and the sound is to be deliberately ignored, as evidenced by the left ear disadvantage in previous studies. In this paper, we extend this pattern of hemispheric asymmetries by showing that the left hemisphere’s specialisation in linguistic processing turns into a disadvantage when the task requires semantic processing, as evidenced by a right ear disadvantage in semantic auditory distraction.