Recent advances in computational modelling have opened new ways to understand the mechanisms underlying cognitive deficits resulting from brain damage. These implementations have followed interactive activation and parallel distributed processing approaches, while the computational potential of more transparent, serial cognitive architectures has received little attention. We tested the ability of the discrete two-stage word production model to account for naming response patterns in aphasia. Our implementation is a simple local connectionist model with two serially ordered networks representing the lexical–semantic and phoneme levels. Anomic deficits were simulated by manipulating three parameters: noise within each of the two networks, leading to semantically and phonologically based errors, and a between-level threshold, leading to no responses. Comparisons with actual naming data from 10 aphasic patients showed that this simple model provides a rather good fit to a variety of clinically observed naming response patterns. However, the existence of output-type semantic naming errors in some patients calls for a modified discrete two-stage model possessing monitoring mechanisms for lexical output.
One of the key assumptions in cognitive neuropsychology is the transparency of a patient’s symptomatology. This means that a careful analysis of performance patterns can guide us in defining the disrupted cognitive component(s) and their nature [7, 8, 9]. Even though transparency can be a complicated issue [2, 15], many lexical processing models with “boxes-and-arrows” imply a rather straightforward relationship between a symptom (e.g. semantic errors) and its underlying mechanism (e.g. a disorder of the semantic system).
In recent computational studies of aphasic symptoms, these straightforward models have not attracted much attention, and the models employed, interactive activation (IA) [32, 40] and parallel distributed processing models [16, 21, 36, 37, 38], have been non-linear. While some results derived from these studies pose important challenges to the transparency assumption, we argue that the computational potential of simpler modular architectures deserves to be studied as well. Only after such an enterprise can we pit competing models of mental architecture against each other more adequately.
In this study, we examine the explanatory power of a computer implementation of a two-stage serial architecture of word production. A number of authors have proposed an organization in which lexical retrieval proceeds in two serially ordered stages: retrieval of lexical–semantic information followed by retrieval of lexical–phonological information (e.g. [7, 25, 30]). This kind of organization has been motivated by patterns of speech errors (the apparent independence of semantically vs phonologically based word substitutions; e.g. [7, 17]), by the existence of tip-of-the-tongue states (meaning-related information remains available even though the phonological form of the target cannot be retrieved at that moment [3, 4]), and by experimental data (e.g. early semantic and late phonological effects in the picture–word interference paradigm [39]).
The most stringent form of the two-stage architecture was explicated by Levelt and his co-workers [31] and is labeled the Discrete Two-Stage (DTS) model. The DTS model claims that in a single-word production task such as picture naming, retrieval of the phonological form is not initiated until a single representation (the one with the highest activation) is selected at the lexical–semantic level. Thus at the first stage of word retrieval, there is only lexical–semantic activity. Correspondingly, only phonological activity is present at the second lexical retrieval stage.
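To make the discreteness claim concrete, the following minimal Python sketch shows the DTS selection step as we read it; the lexicon, activation values, and segmental forms are toy assumptions for illustration, not part of Levelt et al.'s model.

```python
# Stage 1: lexical-semantic activation of the target and its competitors
# (toy values; "cat" is the intended target).
semantic_activation = {"cat": 0.90, "dog": 0.40, "cap": 0.10}

# Discrete selection: only the single most highly activated lexical node
# is handed on to phonological encoding; competitors contribute nothing.
selected = max(semantic_activation, key=semantic_activation.get)

# Stage 2: phonological encoding operates on the selected word alone, so
# there is no phonological activity during stage 1, and no semantic
# activity during stage 2.
forms = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"], "cap": ["k", "ae", "p"]}
print(selected, forms[selected])  # cat ['k', 'ae', 't']
```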
The major challenge to two-stage architectures such as the DTS model has come from Interactive Activation (IA) models (e.g. [12, 18, 44]). In the IA architecture as well, semantic access precedes phonological encoding. However, the lexical nodes (and corresponding phonemes) of all activated semantic candidates, not just the one with the highest activation, receive activation. In addition, some phonological activation feeds back to the lexical level before selection is made. Thus the first two stages of lexical retrieval are not independent as they are in the DTS model [28].
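For contrast, here is a sketch of the cascading and feedback steps that distinguish IA from DTS, using the same toy lexicon as above; the feedback weight is an arbitrary illustrative value.

```python
from collections import defaultdict

semantic_activation = {"cat": 0.90, "dog": 0.40, "cap": 0.10}
forms = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"], "cap": ["k", "ae", "p"]}

# Cascading: every activated lexical node, not just the eventual winner,
# spreads activation to its phonemes before selection.
phoneme_activation = defaultdict(float)
for word, act in semantic_activation.items():
    for seg in forms[word]:
        phoneme_activation[seg] += act

# Feedback: phoneme activation flows back to every word containing that
# phoneme, so form neighbours of the target gain activation pre-selection.
FEEDBACK = 0.1  # illustrative weight, not a fitted parameter
lexical_activation = {
    word: act + FEEDBACK * sum(phoneme_activation[seg] for seg in forms[word])
    for word, act in semantic_activation.items()
}
print(lexical_activation)
```

With these numbers, the form neighbour "cap" roughly triples its activation through feedback from the target's phonemes, whereas the purely semantic competitor "dog" gains much less; it is exactly this kind of form-driven interaction before selection that the DTS model rules out.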
In the present paper, we study the constraints of the DTS model by examining its ability to simulate anomia. This proves to be a considerable challenge given the complexity of anomic patterns in aphasia (for a review, see [15]).
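As a preview of how such a simulation could run, the sketch below perturbs the two-stage pipeline with the three lesion parameters described in the abstract (semantic-level noise, phoneme-level noise, and a between-level threshold); all names, values, and lexicon entries are illustrative assumptions, not the fitted parameters of the study.

```python
import random

rng = random.Random(0)

# Lesionable parameters (illustrative values only).
SEMANTIC_NOISE = 0.3   # noise in the lexical-semantic network -> semantic errors
PHONEME_NOISE = 0.15   # noise in the phoneme network -> phonological errors
THRESHOLD = 0.5        # between-level threshold -> omissions ("no response")

# Toy lexicon: per target, base activations of the target and its semantic
# competitors, plus each word's segmental form.
CANDIDATES = {"cat": {"cat": 1.0, "dog": 0.4, "horse": 0.3}}
FORMS = {"cat": ["k", "ae", "t"], "dog": ["d", "o", "g"],
         "horse": ["h", "o", "r", "s"]}
SEGMENTS = ["k", "ae", "t", "d", "o", "g", "h", "r", "s", "p"]

def name_picture(target):
    # Stage 1: noisy lexical-semantic activation; a competitor can win,
    # surfacing as a semantically related word substitution.
    acts = {w: a + rng.gauss(0.0, SEMANTIC_NOISE)
            for w, a in CANDIDATES[target].items()}
    winner = max(acts, key=acts.get)
    # If the winner stays below the between-level threshold, phonological
    # retrieval never starts: the model produces no response.
    if acts[winner] < THRESHOLD:
        return None
    # Stage 2: noisy phoneme retrieval; each segment may be replaced,
    # surfacing as a phonologically based error.
    return [seg if rng.random() > PHONEME_NOISE else rng.choice(SEGMENTS)
            for seg in FORMS[winner]]

for _ in range(5):  # a few simulated naming trials for the picture "cat"
    print(name_picture("cat"))
```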
The present paper is organized in four parts. First, we review computational versions of the IA model [32] and the DTS model. Second, we analyze the general constraints of our DTS model implementation in explaining aphasic naming response patterns. Third, actual naming response data [27] are used in an attempt to simulate the performance of 10 aphasic patients with our DTS model implementation. Fourth, possible evidence against the present DTS model version is sought both in the present aphasic error corpus and in a corpus of normal speakers’ speech errors. Such evidence would include the so-called mixed error effect (above-chance rates of word substitutions that are both semantically and phonologically related [41, 13]) and the existence of output-type semantic naming errors (production of a semantically related word even though the correct semantic information appears to have been retrieved).