Reliability, validity and treatment responsiveness of the Schizophrenia Cognition Rating Scale
Article code | Publication year | English article length
---|---|---
30190 | 2015 | 9-page PDF
Publisher: Elsevier - Science Direct
Journal: European Neuropsychopharmacology, Volume 25, Issue 2, February 2015, Pages 176–184
English abstract
Cognitive functioning can be assessed with performance-based assessments such as neuropsychological tests and with interview-based assessments. Both assessment methods have the potential to assess whether treatments for schizophrenia improve clinically relevant aspects of cognitive impairment. However, little is known about the reliability, validity and treatment responsiveness of interview-based measures, especially in the context of clinical trials. Data from two studies were utilized to assess these features of the Schizophrenia Cognition Rating Scale (SCoRS). One of the studies was a validation study involving 79 patients with schizophrenia assessed at 3 academic research centers in the US. The other study was a 32-site clinical trial conducted in the US and Europe comparing the effects of encenicline, an alpha-7 nicotinic agonist, to placebo in 319 patients with schizophrenia. The SCoRS interviewer ratings demonstrated excellent test-retest reliability in several different circumstances, including those that did not involve treatment (ICC > 0.90) and during treatment (ICC > 0.80). SCoRS interviewer ratings were related to cognitive performance as measured by the MCCB (r = −0.35), and demonstrated significant sensitivity to treatment with encenicline compared to placebo (p < 0.001). These data suggest that the SCoRS has potential as a clinically relevant measure in clinical trials aiming to improve cognition in schizophrenia, and may be useful for clinical practice. The weaknesses of the SCoRS include its reliance on informant information, which is not available for some patients, and reduced validity when the patient's self-report is the sole information source.
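The central statistics in this abstract are a test-retest intraclass correlation, a Pearson correlation with the MCCB, and a treatment contrast. The abstract does not say which ICC form the authors used, so purely as an illustration the Python sketch below computes ICC(2,1) (Shrout and Fleiss, two-way random effects, absolute agreement, single rating) and a SCoRS-MCCB correlation on simulated data. The sample size echoes the 79-patient validation study, but every variable, value, and effect size is hypothetical.

```python
import numpy as np
from scipy import stats

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rating
    (Shrout & Fleiss, 1979). `scores` is an (n_subjects, k_sessions) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    bms = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
    jms = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)  # between sessions
    ss_total = np.sum((scores - grand) ** 2)
    ems = (ss_total - (n - 1) * bms - (k - 1) * jms) / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)

rng = np.random.default_rng(0)

# Hypothetical data: SCoRS global ratings for 79 patients at two visits
# with no intervening treatment (the test-retest case described above).
true_severity = rng.normal(5.0, 1.5, size=79)
visit1 = true_severity + rng.normal(0, 0.4, size=79)
visit2 = true_severity + rng.normal(0, 0.4, size=79)
print(f"test-retest ICC(2,1) = {icc_2_1(np.column_stack([visit1, visit2])):.2f}")

# Convergent validity: correlation between SCoRS ratings and a simulated MCCB
# composite (higher SCoRS = more impairment, so the expected sign is negative).
mccb = 40.0 - 8.0 * true_severity + rng.normal(0, 20, size=79)
r, p = stats.pearsonr(visit1, mccb)
print(f"SCoRS vs. MCCB: r = {r:.2f}, p = {p:.3g}")
```

With informative simulated data the ICC lands near the > 0.90 range reported above; the point of the sketch is only to make explicit what those two coefficients measure, not to reproduce the study's analysis.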
English introduction
Cognitive impairment in schizophrenia has traditionally been assessed with performance-based cognitive measures (Chapman and Chapman, 1973). Many of these measures were derived from tests developed to assess neurocognitive function for the identification of strengths and weaknesses in patients with brain dysfunction or intellectual impairment, or for examining the effects of aging (Spreen and Strauss, 1998). More recently, tests measuring highly specific cognitive processes, often developed for neuroimaging paradigms, have been utilized as well (Barch et al., 2009).

However, there are multiple practical constraints on the assessment of cognition conducted exclusively with performance-based tests. Most clinicians who might wish to evaluate the severity of cognitive impairment in their patients with schizophrenia do not have the required expertise and resources to conduct meaningful performance-based assessments. Furthermore, the interpretation of the clinical relevance of changes in performance-based measures is not immediately accessible to non-experts, including clinicians, consumers, and family members, and may require different approaches or supplemental assessments with greater face validity. Finally, there is no consensus among experts as to how much change on neuropsychological tests is clinically meaningful.

Regulatory bodies such as the United States Food and Drug Administration (FDA) and the European Medicines Agency (EMA) support the use of cognitive performance measures as primary endpoints in clinical trials for the treatment of cognitive impairment in schizophrenia. However, they have also noted the absence of face validity of performance-based cognitive measures as one of the reasons they require that a pharmacologic treatment also demonstrate efficacy on an endpoint that has greater clinical meaning to clinicians and consumers. These indices could include performance-based measures of functional capacity or interview-based assessments of clinically relevant and easily detectable cognitive change (Buchanan et al., 2005 and Buchanan et al., 2011). In addition, assuming that some treatments become available, clinicians will need an assessment that they can utilize to assess cognitive change in their patients in situations where performance-based cognitive tests are not practically available. Interview-based assessments have the potential to meet these requirements.

Several interview-based measures of cognition are available. The two that have been utilized the most in large-scale studies with adequate methods, such as the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) project, have been the Schizophrenia Cognition Rating Scale (SCoRS) and the Cognitive Assessment Interview (CAI). These measures examine cognitive functioning through questions about functionally relevant, cognitively demanding tasks. As a result, they measure cognitive functioning from a different perspective than performance-based assessments, and a full overlap with performance-based measures is not expected. We will focus in this paper on research recently completed with the SCoRS. Information on the SCoRS' psychometric properties, its relationship to cognitive functioning, as well as to other measures of functional capacity, can be found in a variety of peer-reviewed publications, including Keefe et al. (2006), Green et al. (2008), and Harvey et al. (2011).
Overall, the strengths of the SCoRS are its brief administration time, requiring about 15 min per interview (Keefe et al., 2006 and Green et al., 2008); its relation to real-world functioning (Keefe et al., 2006); good test-retest reliability; and correlations with at least some performance-based measures of cognition (Keefe et al., 2006).

However, several challenges remain. Due to the difficulties that patients with schizophrenia have with reporting accurate information regarding cognition and everyday functioning (Bowie et al., 2006 and Sabbag et al., 2011; also Durand et al., this issue), the validity of the SCoRS and its correlations with performance-based measures of cognition may depend upon the availability of an informant. Since some patients with schizophrenia may not have people who know them well (Patterson et al., 1996 and Bellack et al., 2007), requirements for informant information may reduce the practicality of the SCoRS. It is important to determine the contexts in which informant information is required and whether there are circumstances where it is not. Also, while the US FDA has expressed general acceptance of interview-based measures of cognition as secondary endpoints in clinical trials for drugs to improve cognitive impairment in schizophrenia (Buchanan et al., 2005 and Buchanan et al., 2011), and the SCoRS in particular is being used as a co-primary endpoint in phase 3 registration clinical trials (www.clinicaltrials.gov, accessed May 9, 2014), the effect of treatment on the SCoRS is not well known. Finally, if the SCoRS and similar measures are to be useful for clinical applications, it may be helpful to begin to gather information on the reliability and sensitivity of specific SCoRS items for the purpose of reducing the assessment to its crucial components.

In this paper, we will address the following questions about the SCoRS:

1. What is the structure of the SCoRS items? Do the items measure a single factor or multiple factors? Based upon correlations with cognitive performance measures such as the MATRICS Consensus Cognitive Battery (MCCB), assessment of the reliability of items, and treatment responsiveness, are there opportunities for data reduction? (A rough illustration of one way to screen factor structure follows this list.)
2. What is the relative benefit of informant information, given the potential time and resource cost and the unavailability of reliable informants?
   a. What is the relative reliability of different sources of information?
   b. What is the relative association of data from different sources with cognitive performance measures such as the MCCB?
   c. What is the relative sensitivity to treatment of data from the different sources?
3. Are there differences in the reliability, validity and sensitivity of the SCoRS based upon geographical region and level of expertise and experience with the instrument?
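Question 1 asks whether the SCoRS items reflect one factor or several. The paper's own analysis is not reproduced here; as a rough, assumption-laden sketch of one common screening step, the Python snippet below simulates single-factor item data (the 18-item SCoRS and the 319-patient trial size are taken as the assumed dimensions) and inspects the eigenvalues of the item correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_items = 319, 18   # assumed: trial sample size and 18 SCoRS items

# Hypothetical single-factor data: each item loads on one general
# "everyday cognition" factor plus item-specific noise.
general = rng.normal(0, 1, size=(n_patients, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_items))
items = general @ loadings + rng.normal(0, 0.6, size=(n_patients, n_items))

# Eigenvalues of the item correlation matrix: a single dominant eigenvalue
# (and only one above 1, the Kaiser criterion) is consistent with a
# one-factor structure; several comparable eigenvalues would suggest more.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print("largest eigenvalues:", np.round(eigvals[:4], 2))
print(f"variance captured by first component: {eigvals[0] / n_items:.1%}")
print("eigenvalues > 1:", int(np.sum(eigvals > 1)))
```

A formal answer to question 1 would of course require the real item-level data and a proper factor model; the snippet only makes concrete what "single factor versus multiple factors" means in terms the item correlation matrix exposes.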