Intraindividual variability as an indicator of malingering in head injury
|Article code||Publication year||English article||Persian translation||Word count|
|38155||2002||22-page PDF||available to order||10,014 words|
Publisher : Elsevier - Science Direct
Journal : Archives of Clinical Neuropsychology, Volume 17, Issue 5, July 2002, Pages 423–444
The utility of various measures of malingering was evaluated using an analog design in which half the participants (drawn from three groups: naive healthy people, professionals who work with head-injured people, and individuals who had suffered a head injury but were not currently in litigation) were asked to try their best, while the remainder were asked to feign a believable injury. Participants were assessed with the Reliable Digit Span (RDS) task, the Victoria Symptom Validity Test (VSVT), and the Computerized Dot Counting Test (CDCT) on three separate occasions in order to determine whether repeat administration of tests improves prediction. The results indicated that, regardless of an individual's experience, consideration of both level of performance (particularly on forced-choice symptom validity tasks) and intraindividual variability holds considerable promise for the detection of malingering.
Neuropsychologists are often asked to evaluate the likelihood that an individual is malingering cognitive deficits. Accurate diagnosis is critical because of the high individual and systemic costs of both false-negative and false-positive errors (Slick, Sherman, & Iverson, 1999). Two broad approaches have been used for detecting malingering in the neuropsychological context (Spreen & Strauss, 1998). The first relies on examination of indices derived from conventional neuropsychological measures. The other involves the use of tests that have been specially designed for this purpose. There is a wide array of procedures for detecting malingering that fall under the rubric of examining the pattern of performance on traditional tests. For example, potentially useful indices have been developed by comparing performance on easy and hard items or by evaluating performance curves across multiple items of varying difficulty (e.g., Baker et al., 1993, Frederick et al., 2000 and Tenhula & Sweet, 1996), by examining serial position effects in list learning tasks (e.g., Bernard, 1991 and Suhr et al., 1997; but see Iverson, Franzen, & McCracken, 1991), by examining recall, recognition hits, and discriminability on list recall tasks (e.g., Bernard, 1991, Binder et al., 1993, Coleman et al., 1998, Millis, 1994, Millis et al., 1995, Slick et al., 2000, Suhr et al., 1997, Suhr & Gunstad, 2000 and Sweet et al., 2000), by evaluating performance on implicit memory tests (e.g., Davis et al., 1995 and Horton et al., 1992; but see Hanley, Baker, & Ledson, 1999), by examining the magnitude of errors (Martin, Franzen, & Orey, 1996), by assessing Digit Span (e.g., Beetar & Williams, 1995, Greiffenstein et al., 1994, Strauss et al., 1999 and Suhr et al., 1997), and by comparing indices of attention to indices of memory (e.g., Mittenberg, Azrin, Millsaps, & Heilbronner, 1993) and semantic knowledge (e.g., Mittenberg, Theroux-Fichera, Heilbronner, & Zielinski, 1995).
Overall, attempts to develop malingering indices from conventional neuropsychological tests have met with varying degrees of success, and the consensus is that they may not be sufficiently effective in identifying malingering (e.g., Curtiss & Vanderploeg, 2000, Rosenfeld et al., 2000, Suhr et al., 1997, Tenhula & Sweet, 1996 and Van Gorp et al., 1999). A second approach to the detection of suboptimal performance has focused on the development of tests specially designed to identify aspects of performance suggestive of feigning. These are generally of two types. One type consists of forced-choice symptom validity tasks (e.g., Binder, 1993, Hiscock & Hiscock, 1989, Iverson et al., 1991, Slick et al., 1997 and Tombaugh, 1996) that rely upon a probabilistic analysis of patient performance. Scores falling above or below a wide (90% or more) confidence interval around chance performance are highly unlikely to be the product of random responding, with the former implying intact function and the latter being indicative of exaggerated or faked deficits. Normative cutoff scores have also been derived for most symptom validity tests. The second type of task relies on the production of unusual responses, e.g., reading the wrong letters or counting dots incorrectly (e.g., Boone et al., 2000 and Rey, 1941; reported in Lezak, 1995). There is some evidence to suggest that these special techniques, particularly forced-choice symptom validity tests, achieve the best hit rates for the detection of noncompliance (e.g., Greiffenstein et al., 1994, Rose et al., 1998 and Strauss et al., 1999). However, they have consistently shown only moderate sensitivity, even when normative cutoff scores rather than below-chance scores are used. Thus, scores within the valid range on these tests do not rule out malingering and are therefore not suitable for use as the sole index of patient veracity (Slick et al., 1999).
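The below-chance logic of forced-choice symptom validity tests follows directly from the binomial distribution: a purely random responder on a two-alternative test is expected to score near 50% correct, and the exact probability of any lower score can be computed. The sketch below illustrates that reasoning; the 50-item test length and the example scores are hypothetical, not parameters of the VSVT or any other published instrument.

```python
from math import comb

def binom_cdf(k: int, n: int, p: float = 0.5) -> float:
    """P(X <= k) when X ~ Binomial(n, p): the probability that purely
    random responding on an n-item forced-choice test yields k or fewer
    correct answers."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k + 1))

# Hypothetical 50-item, two-alternative test; chance performance is 25/50.
n = 50
for score in (15, 20, 25):
    print(f"P(correct <= {score}) = {binom_cdf(score, n):.4f}")
```

A score whose lower-tail probability under this distribution is very small (here, roughly 15/50 or fewer correct) is implausible as random guessing, which is why markedly below-chance performance is read as deliberate wrong answering rather than genuine impairment, while scores above chance but below normative cutoffs remain ambiguous.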
As a result, a number of authors (e.g., Nies & Sweet, 1994, Slick et al., 1999 and Spreen & Strauss, 1998) have recommended that clinicians adopt multiple approaches in the diagnostic process. In the personal injury context, malingering typically involves the exaggeration or fabrication of injuries/deficits in an attempt to obtain financial compensation (American Psychiatric Association, 1994 and Slick et al., 1999). The adversarial and competitive nature of litigation has resulted in increased sophistication regarding both the assessment of malingering and the avoidance of detection. Access to information about the nature of brain impairment and the tests used to evaluate these phenomena may substantially alter the ability to detect malingering (Coleman et al., 1998 and Youngjohn et al., 1999). Simulation studies (e.g., Coleman et al., 1998, DiCarlo et al., 2000, Martin et al., 1993, Rose et al., 1998 and Suhr & Gunstad, 2000; but see Rapport, Farchione, Coleman, & Axelrod, 1998) have shown that coaching (e.g., providing information regarding the common deficits associated with head injury, or warning about the presence of measures to detect malingering) can undermine the efficacy of malingering indices. In general, the neuropsychological test performance of coached malingerers (typically healthy university students) tends to be more like the test performance of real patients than that of naive, uncoached malingerers. However, even coached malingerers tend to exaggerate their neuropsychological deficits relative to patients with legitimate brain injuries.