Download English ISI Article No. 33491
Article Title

Evidence-based practice in stuttering: Some questions to consider
Article Code: 33491
Publication Year: 2005
Length: 26-page PDF
Source

Publisher: Elsevier - Science Direct

Journal: Journal of Fluency Disorders, Volume 30, Issue 3, 2005, Pages 163–188

Keywords
Evidence-based practice; Treatment; Empirically-supported principles of change; Lidcombe

Abstract

A recent forum in JFD (28/3, 2003) evaluated the status of evidence-based practice in fluency disorders, and offered recommendations for improvement. This article re-evaluates the level of support available for some popular approaches to stuttering therapy and questions the relative value placed on some types of programs endorsed by the forum. Evidence-based practice is discussed within the context of emerging concerns over its application to non-medical interventions and suggestions are made for grounding fluency interventions by reference to empirically supported principles of change. A popular, evidence-based intervention for stuttering in young children (the Lidcombe program) is evaluated within the suggested parameters.

Introduction

In a recent issue of JFD (28, 3) (hereafter, 28/3), a number of authors (Bothe, 2003; Finn, 2003; Ingham, 2003; Onslow, 2003) proposed certain standards to which stuttering treatment should be held, and further suggested that only a limited number of therapy programs currently meet such evidence-based practice (EBP) standards. In the same series of articles, some authors appeared to imply that treatment programs not meeting the specified standards, and the clinicians who administer them, may in fact be engaging in less than ethical clinical practice, since they volitionally forgo a small set of "validated" techniques for those seemingly supported by a lesser evidence base. In this article, I would like to discuss these implications further, along with related, seemingly provocative issues. I will also address larger questions about the degree to which EBP is currently "ready for prime time" implementation in the field of fluency disorders. In doing so, I will frame my comments as a series of questions that I think we need to ask, and consider answering, before applying some of the extended principles of EBP to the field of stuttering intervention.

On the surface, EBP is a noble concept and goal. Indeed, it would seem nonsensical to argue for therapeutic practice that is not based on some body of evidence. I will not, therefore, position myself as saying that EBP is wrong. At the same time, certain embodiments and extensions of EBP seem less obviously of value and may in fact pose difficulties for researchers, clinicians and their patients.

In framing this article partially as a response to the authors in 28/3, I prefer to start with areas of agreement. I wholeheartedly concur with the obvious need for practitioners to document the rationale for their selection of therapy approaches. I also agree with the general consensus of the authors in the issue that we need far more research into therapeutic efficacy in stuttering treatment. Thus, I agree with Ingham (2003) that researchers need to develop more interest in therapy trials, and that our funding agencies, particularly the American National Institutes of Health, need to invest in them more aggressively. One might conjecture that the lack of funded research in therapy efficacy is due, at least in part, to the relative paucity of such applications when weighed against the bulk of submissions that propose basic research questions.

Despite these strong areas of agreement, however, my affinity for some, if not many, of the arguments raised in the issue begins to wane considerably, because they raise a number of vexing questions. I address what I view to be the most important of these questions in the remainder of this article. Among the issues that I will consider are:

1) The nature and scope of "evidence" and its relationship to clinical practice;
2) The limitations that may be associated with the use of a single framework to implement EBP across medicine and the many health-related professions;
3) The role of different types of evidence in determining the value of specific therapy approaches in stuttering;
4) The role of theory in evaluating treatment approaches;
5) Potential barriers to the gathering of clinical evidence and its implementation by practitioners; and
6) Some logical "next steps" that will be required if practitioners and researchers are to bridge the perceived gaps between evidence and practice in stuttering treatment.

1. What is evidence?
Is evidence a "fuzzy category"? A "fuzzy" category in psychology or linguistics is one that seems to have an identity that can be agreed upon, but has features that are difficult to specify exactly (Rosch, 1973; Rosch & Mervis, 1975). Fuzziness has also been applied to the evaluation of data, as in fuzzy logic (Zadeh, 1965). In the past 30 years, "fuzziness" has spilled over into accounts of logical decision-making in the physical, biological and social sciences, indicating that one person's data may or may not be sufficient to be useful in another's evaluation of a set of facts or features. In much the same way that semanticists have argued about the boundaries that separate cups from bowls, it is not clear that any field can arrive at a perfect definition of evidence, or a complete list of its features, although some domains, such as pharmaceutical intervention in medicine, will find it easier to define evidence operationally.

Narrow criteria going into an evidence-based test of intervention effectiveness improve the likelihood that professionals will agree on the value of the evidence that is produced. For example, if the question is whether a drug lowers blood pressure, there are only a few accepted measures of outcome. In medicine, however, many questions that could appear simple are complicated by the "messiness" of the typical patient, who rarely presents with a single problem or canonical features. Thus, the problem of treating the "whole person" creates a sense of fuzziness in dealing with evidence: to what extent will carefully gathered treatment evidence bear a reasonable relationship to the actual case under consideration? How are treatment outcomes in one domain related to the overall functioning of the individual in other domains?

These questions have generated a certain degree of tension between groups in medicine regarding the importance of personal experience with well-defined individuals when applying evidentiary meta-analyses derived from large groups (Pope, 2003). There is a growing body of reports in which physicians, for example, argue that the complexity of individual profiles supersedes evidence from carefully controlled trials. Thus, there is growing documentation that some medical practitioners "resist" evidence when, in fact, they are allocating value to different types of evidence in making individual clinical decisions. An important element in such resistance is not so much a devaluation of new evidence as personal experience with previously existing treatments that "work" for the clients one typically sees. This type of evidence can be called "anecdotal," as various authors in 28/3 note, but it is powerful when the clinician is both the gatherer and the applier of the data, something quite different from anecdotal evidence derived from the authority of peers or other respected professionals (so-called "authority-based" evidence; Onslow, 2003). One could say that there is evidence that one reads about and evidence that one encounters in everyday practice.