The representation of homophones: More evidence from the remediation of anomia
|Article code||Publication year||English article||Persian translation||Word count|
|29992||2008||18-page PDF||available to order||12487 words|
Publisher : Elsevier - Science Direct
Journal : Cortex, Volume 44, Issue 3, March 2008, Pages 276–293
This paper compares two theoretical positions regarding the mental representation of homophones: first, that homophones have one phonological word form but two grammatical representations (lemmas; e.g., Levelt et al., 1999; Dell, 1990), or second, that they have two separate phonological word forms (e.g., Caramazza et al., 2001). The adequacy of these two accounts for explaining the pattern of generalisation obtained in the treatment of homophone naming in aphasia is investigated. Two single cases are presented in which phonological treatment techniques were used to improve word retrieval. Treatment comprised picture naming of one member of a homophone pair using a phonological cueing hierarchy. A significant improvement in word retrieval was found for both the treated and the untreated homophones, while there was no improvement for phonologically and semantically related controls. It is argued that the data support a shared representation for homophones at the word form level. However, current theories cannot explain this pattern of generalisation without the addition of a mechanism for repetition priming (e.g., as suggested by Wheeldon and Monsell, 1992) and feedback between word forms and lemmas.
In the psycholinguistic literature, there are different theories regarding how the mental lexicon is organised for speech production. In this paper, we will focus on two approaches. The first is the discrete Two-Stage model (Levelt et al., 1999), in which lexicalisation occurs in two steps: (i) a syntactic representation (the lemma) is accessed before (ii) the phonological word form can be activated and selected. The second is the Independent Network (IN) model (Caramazza, 1997), in which there is only one lexical layer, and phonological information can be accessed before grammatical information is fully activated. In the following, the focus will be on ambiguous words: homophones are words which sound the same but have two or more different meanings. They can be homographic, where the spelling is the same (e.g., 'ball' the round object and 'ball' the dance), or heterographic, where the spelling differs (e.g., 'knight' and 'night'). Levelt et al. (1999) assume a single representation for homophones at the word form level (e.g., 'ball' has two entries at the lemma level but only one at the word form level), whereas Caramazza et al. (2001) postulate two separate word form entries (e.g., 'ball' has two separate entries at the word form level). We address the issue of the representation of homophones by using data from the treatment of aphasia. If we train one partner of a homophone pair but not the other, and generalisation to the untrained homophone word form is detected after training, then this supports models with a single, shared word form entry for homophones. If we find no generalisation to the untreated homophone, this supports models with separate homophone word forms. We will first review the literature regarding homophone production and demonstrate the important role homophones can play in specifying theories of spoken word production.
2. Psycholinguistic debate regarding the representation of homophones
Dell (1990) analysed speech errors made by normal speakers and showed that the frequency of both content and function words determines how error-prone they are. However, unlike other words, low-frequency homophones are no more error-prone than high-frequency homophones. For homophones, the important determinant of error rate appeared to be the summed or cumulative frequency of both homophones rather than the item-specific frequency of one homophone. Dell argued, therefore, that low-frequency homophones 'inherit' the frequency of their high-frequency twin: "It appears to be the frequency of the form of the word rather than that of the word itself that influences its sound errors. A low-frequency word that is homophonous with a high-frequency word may inherit the relative invulnerability of the high-frequency homophone by sharing its form" (Dell, 1990, p. 326). This is implemented in Dell's (1990) interactive Two-Step model (see Fig. 1), in which activation at the lemma (semantic–syntactic) level spreads to the adjacent phonological level (where phonological word forms are represented as single nodes). The word form level in turn feeds activation back to the lemma level, and forwards its activation to the segment level. A homophonic word form can therefore activate its second lemma meaning via feedback.
Fig. 1. Representation of homophones in the lexicon according to Dell (1990).
Converging evidence for the frequency inheritance effect was found by Jescheniak and Levelt (1994). They used an English–Dutch translation task in which the Dutch translation of the English word was a homophone. In this task, Jescheniak and Levelt (1994) demonstrated that low-frequency homophones had almost identical translation times to their high-frequency twins.
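To make the feedback mechanism concrete, the following is a minimal spreading-activation sketch. It is purely illustrative: the node names, rate parameter and update rule are invented for this example and are not Dell's (1990) actual implementation. It shows how a word form node shared by two lemmas, once activated by one of them, feeds activation back to the other member of the pair.

```python
# Toy spreading-activation network with a shared homophone word form.
# All node names and parameters are illustrative, not Dell's implementation.

class Network:
    def __init__(self):
        self.activation = {}   # node -> current activation level
        self.links = {}        # node -> list of connected nodes

    def add_link(self, a, b):
        # Links are bidirectional: forward spreading plus feedback.
        self.links.setdefault(a, []).append(b)
        self.links.setdefault(b, []).append(a)
        self.activation.setdefault(a, 0.0)
        self.activation.setdefault(b, 0.0)

    def spread(self, steps=2, rate=0.5):
        # Each step, every node passes a fraction of its activation
        # to all of its neighbours (including back up to lemmas).
        for _ in range(steps):
            new = dict(self.activation)
            for node, act in self.activation.items():
                for neighbour in self.links.get(node, []):
                    new[neighbour] += rate * act
            self.activation = new

net = Network()
# Two lemmas share one phonological word form, as in Dell (1990).
net.add_link("lemma:ball(round object)", "form:/bɔːl/")
net.add_link("lemma:ball(dance)", "form:/bɔːl/")
net.add_link("form:/bɔːl/", "segment:/b/")

net.activation["lemma:ball(round object)"] = 1.0
net.spread(steps=2)

# After two steps, feedback through the shared form has activated
# the untargeted 'dance' lemma as well.
print(net.activation["lemma:ball(dance)"] > 0)
```

With only one lemma stimulated, the second lemma nonetheless ends up with non-zero activation after two steps, because the shared form node passes activation back to every lemma connected to it.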
In contrast, a clear-cut effect of frequency on translation time was found for high- and low-frequency non-homophones. Jescheniak et al. (2003) extended these findings by replicating the English–Dutch translation task and adding an English–German translation task. In both languages they found a significantly faster translation time for the homophone condition (regardless of whether the translation response was the high- or low-frequency partner) compared with the low-frequency non-homophone condition; the high-frequency non-homophone condition (matched on cumulative homophone frequency) showed a similar translation time to the homophone condition. Hence, they provided a cross-linguistic replication of the basic finding that low-frequency homophones are named significantly faster than low-frequency controls (see also Cutting and Ferreira, 1999). The authors explain their findings using a discrete Two-Stage model, in which one homophone word form entry is activated from two lemma entries (see Fig. 2a): the frequency of the (single) phonological representation of a homophone reflects the frequency of occurrence of both homophone meanings.
Fig. 2. Representation of homophones in the lexicon: the Two-Stage model and the IN model.
However, in a translation task with Spanish–English bilinguals, Caramazza et al. (2001) could not replicate the frequency inheritance effect for homophones. Instead, they found a clear-cut effect of frequency for homophones (low-frequency homophones were slower than high-frequency homophones), just as for non-homophonic words. Moreover, in an English picture naming task they found significantly slower naming latencies for English homophones than for English control words matched for cumulative frequency. The authors inferred that specific-word frequency (rather than cumulative frequency) must be the critical variable in homophone production.
These results were also replicated in Mandarin (Caramazza et al., 2001; but see Jescheniak et al., 2003 for a critique of this paper). Caramazza (1997), Caramazza et al. (2001) and Caramazza and Miozzo (1998) therefore reject the existence of a homophone processing advantage and argue against the need for an independent lemma representation. They propose direct access from semantics to the phonological word form level, where each homophone meaning has a separate word form node (see Fig. 2b). In their IN model, while it is possible to activate syntactic information independently via semantics (indicated by dotted lines in the original model), syntactic information appears to require additional activation from the phonological and/or orthographic word form, indicating that syntax is primarily selected after phonology. The model thus postulates weak phonological mediation, while at the same time allowing independent activation of word form and syntactic information from semantics. While the IN model (e.g., Caramazza, 1997) is strictly feedforward, in their discussion of the homophone frequency inheritance effect found by Jescheniak and Levelt (1994) and Jescheniak et al. (2003), Caramazza and Miozzo (1998) suggest an adaptation to this architecture: the addition of feedback from the segment level to the form level. This version of the IN model with feedback would be able to explain the frequency inheritance effect, since although homophones do not share word forms, their separate word forms activate the same phonemes. Bonin and Fayol (2002) found no evidence of a homophone processing advantage caused by a frequency inheritance effect. They measured response times for spoken and written picture naming of French heterographic homophones. In both modalities, responses were significantly slower for low-frequency homophones than for high-frequency homophones.
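The segment-to-form feedback proposal can be sketched in the same illustrative spirit. In this toy computation (all weights and resting levels are invented for the example), two separate word forms share their phoneme nodes, and feedback from those shared segments boosts the low-frequency form, mimicking frequency inheritance without a shared word form node.

```python
# Sketch of the feedback-augmented IN architecture discussed by
# Caramazza and Miozzo (1998): two separate word forms, shared phonemes,
# and segment-to-form feedback. All numbers are illustrative.

forms = {"form:knight": 0.2, "form:night": 0.9}   # resting levels ~ word frequency
shared_segments = ["/n/", "/ai/", "/t/"]

# Forward pass: both forms activate the shared segment nodes.
segments = {s: sum(forms.values()) * 0.5 for s in shared_segments}

# Feedback pass: the shared segments return activation to BOTH forms,
# so the low-frequency form inherits support from its high-frequency twin.
boosted = {f: act + 0.1 * sum(segments.values()) for f, act in forms.items()}

print(boosted["form:knight"] > forms["form:knight"])  # inherited boost
```

The key point the sketch makes is that the boost to the low-frequency form comes entirely through the phoneme layer: the two word form nodes never interact directly, yet the low-frequency one still benefits from its twin's frequency.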
However, the evidence would have been even stronger had the authors included a low-frequency non-homophone control condition matched to the low-frequency homophones, in order to show that low-frequency homophones are not only produced more slowly than high-frequency homophones, but also as slowly as low-frequency non-homophones. Unfortunately, no such control condition was included. In a second experiment, participants had to categorise the pictures used in Experiment 1 as artificial or naturally occurring objects. As low-frequency homophones were categorised faster than high-frequency homophones, Bonin and Fayol excluded a conceptual/semantic source for the slower/less accurate processing. Miozzo et al. (2004) used naming performance in anomia to investigate the representation of homophones, combining the frequency argument with aphasic response types. In their first experiment, naming performance for low-frequency homophones was compared with naming performance for low- and high-frequency controls. Response accuracy for homophones was similar to that for the low-frequency controls; the authors therefore concluded that a homophone behaves according to its specific-word frequency, and interpreted this as evidence for the independent representation account of homophones. Miozzo and colleagues also performed the reverse experiment: high-frequency homophones showed the same naming accuracy as their high-frequency controls, rather than patterning with their low-frequency partners and low-frequency controls. Once again, this is evidence for the independent representation account. Note, however, that although these experiments use aphasic naming performance as a measure, the critical variable is still frequency. Spinelli and Alario (2002) used a different paradigm to investigate homophone production in French.
They used only those homophones whose meanings differ but whose identical phonological forms are associated with different grammatical genders (French has two: the masculine determiner 'le' and the feminine determiner 'la'). They examined whether a context marked for gender would affect semantic priming compared to an unmarked context. They used heterographic homophones that differ in gender (e.g., /sel/: le sel (salt), la selle (saddle)) as auditory primes in cross-modal semantic priming tasks. When the prime was presented without an article (e.g., /sel/), it was ambiguous as to which homophone meaning it referred to. In this condition, there was semantic priming of lexical decision to words semantically related to both meanings of the homophone (e.g., cheval (horse) and poivre (pepper)). In contrast, when the homophone prime was presented with the article (a gender-marked context, e.g., /la sel/ (saddle)), lexical decision was faster only for the prime-related (gender-congruent) meaning (e.g., cheval (horse)). Spinelli and Alario (2002) argue that this gender context effect is incompatible with a theory where such effects arise at the word form level and where homophones share a single form representation. Instead, they advocate an account similar to Caramazza and colleagues' IN model (see earlier). However, the authors do not attempt to explain their data within a theory that has a single word form representation and a lemma level. In such a theory, the gender-specified prime would activate the lemma for the homophone with congruent gender, which in turn would activate the conceptual representation for that lemma alone, resulting in semantic priming restricted to one homophone. In contrast, when no gender context is provided, the phonological form would activate both lemmas and both conceptual representations, with resultant priming for both members of the pair.
Hence, contrary to their claims, Spinelli and Alario's data are also consistent with a theory where homophones share a single word form representation, if we assume an additional syntactic (lemma) level.
Conclusion
This paper presents two people with aphasia who showed generalisation from treatment of one member of a homophone pair to the other, but not to phonologically related stimuli. These findings replicate in English the findings of Biedermann et al. (2002) in German. The data have been discussed in relation to three models: the Two-Stage model of Levelt et al. (1999), the Dell (1990) model and the IN model (Caramazza, 1997). Although, without feedback, the Two-Stage model and the IN model lack sufficient mechanisms to explain the generalisation between homophones, we were able to explain our findings in modified versions of these two models. None of the models – including Dell's – implements long-term repetition priming; but if we assume that repetition priming can be implemented, and follow Wheeldon and Monsell's (1992) proposal that it occurs in the links to the word form, then Dell's (1990) model accounts for our data naturally with a single word form representation for homophones, using the same feedback mechanism it provides to account for the frequency inheritance effect. In their current forms, however, the IN model and the Two-Stage model remain unable to account for the data. Levelt et al.'s model, with a single word form representation for homophones, would require modification: either the inclusion of word form-to-lemma feedback (as suggested by Dell), or repetition priming effects located at the word form level. The IN model, with two word form representations for homophones, would require feedback to the phoneme level (but this predicts generalisation to phonologically related stimuli); in addition, like Levelt et al.'s model, it would require repetition priming effects to be located at the level of the word form, or additional feedback from word form to semantics. The strength of this paper lies in taking a new perspective on the investigation of the representation of homophones.
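The repetition priming account can be illustrated with a toy Hebbian update (again, the item names and learning rate are invented, not the authors' implementation): because a homophone pair shares one word form and that form feeds back to every lemma linked to it, treating one member strengthens the untreated member's link as well, while an unrelated control is untouched.

```python
# Illustrative sketch of treatment as Hebbian strengthening of
# lemma-to-word-form links in a lexicon with a shared homophone form
# and form-to-lemma feedback. Items and learning rate are invented.

weights = {
    ("lemma:knight", "form:/nait/"): 0.3,  # treated homophone
    ("lemma:night", "form:/nait/"): 0.3,   # untreated twin (shared form)
    ("lemma:nurse", "form:/nɜːs/"): 0.3,   # unrelated control
}

def treat(target_lemma, weights, lr=0.1):
    """Name the target; the shared form feeds back to every lemma linked to it."""
    # Find the word form the treated lemma maps onto.
    form = next(f for (l, f) in weights if l == target_lemma)
    # Feedback: every lemma connected to that form becomes co-active,
    # so its link is strengthened too (Hebbian update).
    for (l, f) in weights:
        if f == form:
            weights[(l, f)] += lr

for _ in range(5):                # five treatment sessions
    treat("lemma:knight", weights)

print(weights[("lemma:night", "form:/nait/")])   # untreated homophone improves
print(weights[("lemma:nurse", "form:/nɜːs/")])   # control unchanged
```

Under these assumptions the untreated homophone's link ends up exactly as strong as the treated one's, while the control never changes, which is the qualitative pattern of generalisation reported in the paper.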
We have shifted the focus to evidence that does not rely on frequency for its argument, instead using impaired word form access in aphasia to investigate the representation of homophones. This allows us to circumvent the controversial questions of the presence or absence of frequency inheritance effects in homophones and of the locus of frequency effects. That the homophone issue can be investigated using data from aphasia also supports the use of treatment as a source of empirical evidence in the evaluation of theories of language processing (Nickels, 2002). The outcome of this paper favours a single representation for homophones at the word form level.